AWS Certified Generative AI Developer - Professional (AIP-C01) Exam Guide
Version 1.0 AIP-C01
Introduction
The AWS Certified Generative AI Developer - Professional (AIP-C01) exam is intended for individuals who perform a developer role. This exam validates a candidate's ability to develop, test, deploy, and troubleshoot generative AI applications by using AWS services and tools.
The exam also validates a candidate's ability to complete the following tasks:
- Integrate foundation models (FMs) and manage data for generative AI applications.
- Implement and integrate generative AI solutions by using AWS services and tools.
- Ensure AI safety, security, and governance for generative AI applications.
- Optimize the operational efficiency and performance of generative AI applications.
- Test, validate, and troubleshoot generative AI applications.
Target Candidate Description
The target candidate has experience developing and maintaining generative AI applications by using AWS services and tools. The target candidate also has experience with the following areas:
- Proficiency in programming languages such as Python or JavaScript/TypeScript
- Understanding of generative AI fundamentals, including transformer architectures and FM capabilities
- Hands-on experience with AWS AI/ML services, including Amazon Bedrock, Amazon SageMaker, and AWS Lambda
- Knowledge of software development best practices, including version control, testing, and CI/CD pipelines
- Experience with API development and integration
- Understanding of cloud computing concepts and AWS core services
Recommended AWS Knowledge
The target candidate should have the following knowledge and experience:
- 2 or more years of experience developing applications on AWS
- Hands-on experience with Amazon Bedrock or Amazon SageMaker for generative AI development
- Experience with AWS SDKs (for example, AWS SDK for Python [Boto3]) and AWS CLI for programmatic interactions
- Knowledge of containerization and serverless technologies on AWS
- Understanding of networking concepts in AWS (for example, Amazon VPC, security groups, AWS PrivateLink)
- Familiarity with monitoring and logging services (for example, Amazon CloudWatch, AWS CloudTrail)
Exam Content
Question Types
The exam contains one or more of the following question types:
- Multiple choice: Has one correct response and three incorrect responses (distractors).
- Multiple response: Has two or more correct responses out of five or more response options.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 65 questions that affect your score.
Unscored Content
The exam includes 20 unscored questions that do not affect your score. AWS collects information about performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam Results
The AWS Certified Generative AI Developer - Professional (AIP-C01) exam has a pass or fail designation. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100-1,000. The minimum passing score is 750. Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table of classifications contains general information that highlights your strengths and weaknesses. Use caution when you interpret section-level feedback.
Content Outline
This exam guide includes weightings, content domains, and task statements for the exam. This guide does not provide a comprehensive list of the content on the exam. However, additional context for each task statement is available to help you prepare for the exam.
The exam has the following content domains and weightings:
- Domain 1: Foundation Model Integration, Data Management, and Compliance (31% of scored content)
- Domain 2: Implementation and Integration (26% of scored content)
- Domain 3: AI Safety, Security, and Governance (20% of scored content)
- Domain 4: Operational Efficiency and Optimization for GenAI Applications (12% of scored content)
- Domain 5: Testing, Validation, and Troubleshooting (11% of scored content)
Domain 1: Foundation Model Integration, Data Management, and Compliance
Task Statement 1.1: Analyze requirements and design generative AI solutions.
Knowledge of:
- Generative AI application architecture patterns (for example, multi-tier architectures, microservices, event-driven architectures)
- AWS services for generative AI application development (for example, Amazon Bedrock, Amazon SageMaker, AWS Lambda, Amazon ECS, Amazon EKS)
- Design considerations for generative AI applications (for example, scalability, availability, performance, cost, security)
- Data flow patterns for generative AI applications (for example, synchronous, asynchronous, streaming, batch)
- Integration patterns for generative AI applications (for example, API Gateway, message queues, event buses)
Skills in:
- Translating business requirements into technical architectures for generative AI solutions
- Selecting appropriate AWS services and features for generative AI application components
- Designing data flow and integration patterns for generative AI applications
- Evaluating trade-offs between different architectural approaches (for example, cost vs. performance, latency vs. throughput)
- Designing solutions that meet non-functional requirements (for example, security, compliance, availability)
Task Statement 1.2: Select and configure foundation models.
Knowledge of:
- FM types and their capabilities (for example, text generation, code generation, image generation, multi-modal models, embedding models)
- FM selection criteria (for example, model size, latency, cost, accuracy, context window, language support, customization options)
- Amazon Bedrock model providers and their available models (for example, Anthropic Claude, Amazon Titan, Meta Llama, Cohere, Stability AI, Mistral AI)
- Model customization techniques (for example, fine-tuning, continued pre-training, prompt engineering, Retrieval Augmented Generation [RAG])
- Model inference parameters and their effects (for example, temperature, top_p, top_k, max tokens, stop sequences)
- Model versioning and lifecycle management
Skills in:
- Evaluating and selecting FMs based on use case requirements
- Configuring model inference parameters to optimize output quality
- Implementing model access and permissions by using Amazon Bedrock or Amazon SageMaker
- Comparing model performance across different providers and model families
- Configuring model throughput and provisioned capacity
- Managing model versions and updates
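To make the inference parameters concrete, the following sketch assembles an `inferenceConfig` for the Amazon Bedrock Converse API and invokes a model with it. The model ID and parameter values are illustrative only; note that `top_k` is not part of the base `inferenceConfig` and, for providers that support it, is passed through provider-specific `additionalModelRequestFields`.

```python
def build_inference_config(temperature=0.2, top_p=0.9, max_tokens=512,
                           stop_sequences=None):
    """Assemble the Converse API inferenceConfig block.

    Lower temperature/top_p produce more deterministic output; max_tokens
    caps response length (and cost); stop sequences end generation early.
    """
    return {
        "temperature": temperature,
        "topP": top_p,
        "maxTokens": max_tokens,
        "stopSequences": stop_sequences or [],
    }


def invoke(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    # boto3 is imported here so the helper above stays usable offline.
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig=build_inference_config(),
    )
    return response["output"]["message"]["content"][0]["text"]
```

Separating the parameter assembly from the call makes it easy to A/B test different configurations against the same prompt set.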
Task Statement 1.3: Implement data validation and processing pipelines for FM consumption.
Knowledge of:
- Data preprocessing techniques for FM inputs (for example, text cleaning, tokenization, normalization, chunking strategies)
- Data validation methods and frameworks (for example, schema validation, data quality checks, input sanitization)
- Data transformation patterns for different FM input types (for example, text, images, structured data)
- AWS services for data processing (for example, AWS Lambda, AWS Glue, Amazon S3, AWS Step Functions)
- Data format requirements for different FMs and APIs
- Chunking strategies and their trade-offs (for example, fixed-size, semantic, recursive, document-based)
Skills in:
- Building data preprocessing pipelines for FM inputs
- Implementing data validation and quality checks
- Transforming data into formats suitable for FM consumption
- Implementing error handling and retry logic for data pipelines
- Optimizing data processing for cost and performance
- Implementing data versioning and lineage tracking
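As a minimal illustration of the simplest chunking strategy listed above, this sketch implements fixed-size chunking with overlap. The chunk size and overlap values are illustrative; semantic and recursive strategies trade this simplicity for better boundary placement.

```python
def chunk_fixed(text, chunk_size=500, overlap=50):
    """Split text into fixed-size chunks with overlapping boundaries.

    Overlap preserves context that would otherwise be cut at chunk
    edges, at the cost of some duplicated storage and embedding spend.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

In practice the chunk size is tuned against the embedding model's input limit and the retrieval granularity the application needs.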
Task Statement 1.4: Design and implement vector store solutions.
Knowledge of:
- Vector database concepts (for example, embeddings, similarity search, distance metrics, indexing algorithms)
- AWS services for vector storage (for example, Amazon OpenSearch Service, Amazon Aurora PostgreSQL with pgvector, Amazon Neptune, Amazon DocumentDB, Amazon MemoryDB)
- Embedding model selection and configuration (for example, Amazon Titan Embeddings, Cohere Embed)
- Vector indexing strategies (for example, HNSW, IVF, flat indexing) and their trade-offs
- Dimensionality considerations and their impact on performance and storage
- Vector store scaling and performance optimization
Skills in:
- Selecting and configuring appropriate vector store solutions based on requirements
- Generating and managing embeddings by using embedding models
- Implementing vector indexing and search configurations
- Optimizing vector store performance (for example, index tuning, sharding, caching)
- Implementing vector store data ingestion pipelines
- Managing vector store lifecycle operations (for example, updates, deletions, re-indexing)
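The core operations behind similarity search can be sketched as follows: generating an embedding through Amazon Bedrock and comparing vectors with cosine similarity (one of the distance metrics mentioned above). The Titan Embeddings request/response shape shown here reflects the documented `inputText`/`embedding` fields, but treat the model ID as illustrative.

```python
import json
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def embed(text, model_id="amazon.titan-embed-text-v2:0"):
    # boto3 is imported here so cosine_similarity stays usable offline.
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]
```

A production vector store (for example, OpenSearch Service or pgvector) performs this comparison with an approximate index such as HNSW rather than a brute-force scan.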
Task Statement 1.5: Design retrieval mechanisms for FM augmentation.
Knowledge of:
- RAG architecture patterns and components (for example, retrieval, augmentation, generation)
- Amazon Bedrock Knowledge Bases configuration and management
- Retrieval strategies (for example, semantic search, keyword search, hybrid search)
- Context window management and optimization
- Relevance scoring and re-ranking methods
- Data source connectors and integration patterns (for example, Amazon S3, web crawlers, databases, Confluence, SharePoint)
Skills in:
- Designing and implementing RAG solutions by using Amazon Bedrock Knowledge Bases
- Configuring data source connectors and sync schedules
- Implementing retrieval strategies that optimize relevance and accuracy
- Managing context window utilization for effective augmentation
- Implementing metadata filtering and search refinement
- Evaluating and tuning retrieval performance
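The retrieve-augment-generate flow above can be exercised through the Bedrock `RetrieveAndGenerate` API. This sketch builds the knowledge base configuration (including the `numberOfResults` retrieval setting) and issues the query; the knowledge base ID and model ARN are placeholders you would supply from your own environment.

```python
def kb_config(kb_id, model_arn, top_k=5):
    """Build the retrieveAndGenerateConfiguration payload for a knowledge base query."""
    return {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": kb_id,
            "modelArn": model_arn,
            "retrievalConfiguration": {
                "vectorSearchConfiguration": {"numberOfResults": top_k}
            },
        },
    }


def ask(question, kb_id, model_arn):
    # boto3 is imported here so kb_config stays usable offline.
    import boto3

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration=kb_config(kb_id, model_arn),
    )
    return response["output"]["text"]
```

Tuning `top_k` is one of the simplest retrieval levers: too few results starve the model of context, while too many crowd the context window with marginal passages.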
Task Statement 1.6: Implement prompt engineering strategies and governance for FM interactions.
Knowledge of:
- Prompt engineering techniques (for example, zero-shot, few-shot, chain-of-thought, role-based, system prompts)
- Prompt templates and parameterization strategies
- Prompt governance and management best practices
- Prompt versioning and lifecycle management
- Prompt injection attacks and mitigation strategies
- Amazon Bedrock prompt management features (for example, prompt flows, prompt versions)
Skills in:
- Designing and implementing effective prompts for different use cases
- Creating reusable prompt templates with parameterization
- Implementing prompt governance frameworks (for example, review processes, approval workflows)
- Managing prompt versions and deployments
- Testing and evaluating prompt effectiveness
- Implementing prompt guardrails and safety measures
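A reusable, parameterized prompt template can be sketched in a few lines. This hypothetical `PromptTemplate` class validates that every variable in the template is supplied before rendering, which catches a common class of prompt bugs (silently missing context) and gives governance processes a concrete artifact to version and review.

```python
import re


class PromptTemplate:
    """Reusable prompt template with required-variable validation."""

    def __init__(self, template):
        self.template = template
        # Collect the {placeholder} names that must be supplied at render time.
        self.variables = set(re.findall(r"\{(\w+)\}", template))

    def render(self, **kwargs):
        missing = self.variables - kwargs.keys()
        if missing:
            raise ValueError(f"missing template variables: {sorted(missing)}")
        return self.template.format(**kwargs)
```

Stored alongside a version identifier, templates like this can flow through the review and approval workflows described above before deployment.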
Domain 2: Implementation and Integration
Task Statement 2.1: Implement agentic AI solutions and tool integrations.
Knowledge of:
- Agentic AI concepts and architectures (for example, autonomous agents, multi-agent systems, tool use, reasoning loops)
- Amazon Bedrock Agents configuration and management (for example, action groups, knowledge bases, guardrails)
- Tool and API integration patterns for agents (for example, AWS Lambda functions, API definitions, OpenAPI schemas)
- Agent orchestration strategies (for example, sequential, parallel, hierarchical)
- Agent memory and state management
- Multi-agent collaboration patterns
Skills in:
- Designing and implementing AI agents by using Amazon Bedrock Agents
- Configuring agent action groups and tool integrations
- Implementing agent orchestration workflows
- Managing agent state and conversation context
- Integrating agents with external APIs and services
- Testing and debugging agent behavior
Task Statement 2.2: Implement model deployment strategies.
Knowledge of:
- Model deployment options on AWS (for example, Amazon Bedrock on-demand, provisioned throughput, Amazon SageMaker endpoints)
- Deployment strategies (for example, blue/green, canary, rolling, A/B testing)
- Model hosting configurations (for example, instance types, auto-scaling, multi-model endpoints)
- Containerization for model deployment (for example, custom inference containers, Amazon ECR)
- Serverless deployment options (for example, AWS Lambda, Amazon Bedrock)
- Model deployment pipeline automation (for example, AWS CodePipeline, AWS CodeBuild)
Skills in:
- Deploying models by using Amazon Bedrock and Amazon SageMaker
- Configuring auto-scaling for model endpoints
- Implementing deployment strategies for model updates
- Building CI/CD pipelines for model deployment
- Managing model deployment configurations and environments
- Monitoring deployment health and rollback procedures
Task Statement 2.3: Design and implement enterprise integration architectures.
Knowledge of:
- Enterprise integration patterns for generative AI (for example, API Gateway patterns, event-driven architectures, message queuing)
- AWS integration services (for example, Amazon API Gateway, Amazon EventBridge, Amazon SQS, Amazon SNS, AWS Step Functions)
- Authentication and authorization patterns (for example, Amazon Cognito, IAM, API keys, OAuth)
- Rate limiting and throttling strategies
- Caching strategies for generative AI applications (for example, Amazon ElastiCache, Amazon CloudFront)
- Cross-account and cross-region architectures
Skills in:
- Designing API architectures for generative AI applications
- Implementing authentication and authorization for API access
- Configuring rate limiting and throttling for API endpoints
- Implementing caching strategies to optimize performance and reduce costs
- Building event-driven architectures for asynchronous generative AI workflows
- Implementing cross-account and cross-region access patterns
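While API Gateway provides managed throttling, the underlying mechanism is worth understanding. The following is a minimal token-bucket rate limiter sketch: it permits bursts up to a fixed capacity while enforcing a steady refill rate, the same model API Gateway applies to its rate and burst limits.

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity`,
    refilling at `rate` requests per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a generative AI application, a limiter like this in front of FM calls also protects against runaway token spend, not just request-volume throttling.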
Task Statement 2.4: Implement FM API integrations.
Knowledge of:
- Amazon Bedrock APIs (for example, InvokeModel, InvokeModelWithResponseStream, Converse, ConverseStream)
- AWS SDKs for generative AI development (for example, AWS SDK for Python [Boto3], AWS SDK for JavaScript)
- API request and response formats for different model providers
- Streaming response handling patterns
- Error handling and retry strategies for FM APIs
- Token management and usage tracking
Skills in:
- Implementing FM API calls by using AWS SDKs
- Handling streaming responses from FM APIs
- Implementing error handling, retry logic, and circuit breakers
- Managing API authentication and credentials
- Implementing token usage tracking and management
- Optimizing API calls for latency and throughput
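Streaming and retry handling can be combined in one sketch: a generator that yields text deltas from the Converse streaming API and retries throttled requests with exponential backoff and full jitter. The model ID and retry budget are illustrative.

```python
import random
import time


def backoff_delay(attempt, base=0.5, cap=20.0):
    """Exponential backoff with full jitter: uniform in [0, min(cap, base * 2^attempt)]."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))


def stream_completion(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0",
                      max_attempts=5):
    # boto3 is imported here so backoff_delay stays usable offline.
    import boto3

    client = boto3.client("bedrock-runtime")
    for attempt in range(max_attempts):
        try:
            response = client.converse_stream(
                modelId=model_id,
                messages=[{"role": "user", "content": [{"text": prompt}]}],
            )
            for event in response["stream"]:
                if "contentBlockDelta" in event:
                    yield event["contentBlockDelta"]["delta"]["text"]
            return
        except client.exceptions.ThrottlingException:
            time.sleep(backoff_delay(attempt))
    raise RuntimeError("exhausted retries after repeated throttling")
```

Jittered backoff prevents a fleet of clients from retrying in lockstep, which is what turns a brief throttling event into a sustained one.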
Task Statement 2.5: Implement application integration patterns and development tools.
Knowledge of:
- Generative AI application development frameworks and libraries (for example, LangChain, LlamaIndex, Amazon Bedrock SDK)
- Application integration patterns (for example, chatbots, document processing, code generation, content creation)
- Conversation management patterns (for example, session management, conversation history, context management)
- AWS development tools for generative AI (for example, Amazon Q Developer, AWS SDKs, AWS CLI)
- Infrastructure as code (IaC) for generative AI applications (for example, AWS CloudFormation, AWS CDK)
- Logging and observability integration (for example, Amazon CloudWatch, AWS X-Ray)
Skills in:
- Building generative AI applications by using frameworks and libraries
- Implementing conversation management and session handling
- Integrating generative AI capabilities into existing applications
- Using IaC to deploy generative AI infrastructure
- Implementing logging and observability for generative AI applications
- Using AWS development tools for generative AI application development
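Conversation history management ultimately reduces to fitting past turns into the model's context window. This sketch trims history to a token budget, keeping the most recent turns; the 4-characters-per-token estimate is a rough English-text heuristic, and a production system would use the model's actual tokenizer or reported usage counts.

```python
def estimate_tokens(text):
    """Rough heuristic: about 4 characters per token for English text."""
    return max(1, len(text) // 4)


def trim_history(messages, budget=1000):
    """Keep the most recent messages that fit within the token budget.

    Walks the history newest-first, stopping as soon as the budget
    would be exceeded, then restores chronological order.
    """
    kept, total = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

Variants of this pattern pin the system prompt, or summarize dropped turns instead of discarding them outright.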
Domain 3: AI Safety, Security, and Governance
Task Statement 3.1: Implement input and output safety controls.
Knowledge of:
- Input validation and sanitization techniques for generative AI (for example, prompt injection detection, content filtering, input length validation)
- Output safety controls (for example, content filtering, toxicity detection, PII detection, hallucination mitigation)
- Amazon Bedrock Guardrails configuration and management (for example, content filters, denied topics, word filters, sensitive information filters, contextual grounding)
- Safety testing methodologies (for example, red teaming, adversarial testing, boundary testing)
- Content moderation services and techniques (for example, Amazon Comprehend, Amazon Rekognition for image moderation)
Skills in:
- Configuring Amazon Bedrock Guardrails for content safety
- Implementing input validation and sanitization pipelines
- Implementing output filtering and post-processing for safety
- Designing and executing safety testing procedures
- Implementing content moderation workflows
- Monitoring and responding to safety violations
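Input screening can be illustrated with a deliberately naive sketch: a length check plus keyword heuristics for common prompt-injection phrasings. The patterns here are illustrative examples only; in practice this kind of pre-screen complements, rather than replaces, Amazon Bedrock Guardrails and model-level defenses, since simple pattern lists are easy to evade.

```python
import re

# Illustrative heuristics only; real injection attempts vary widely.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"reveal (the |your )?system prompt",
    r"you are now",
]


def screen_input(text, max_length=4000):
    """Pre-screen user input; returns (allowed, reason)."""
    if len(text) > max_length:
        return False, "input exceeds length limit"
    lowered = text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"matched suspicious pattern: {pattern}"
    return True, "ok"
```

Logging the rejection reason (without logging the raw input if it may contain PII) supports the monitoring and response workflows listed above.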
Task Statement 3.2: Implement data security and privacy controls.
Knowledge of:
- Data encryption at rest and in transit for generative AI applications (for example, AWS KMS, TLS/SSL)
- Data access control mechanisms (for example, IAM policies, resource-based policies, VPC endpoints, AWS PrivateLink)
- PII detection and handling (for example, Amazon Comprehend PII detection, data masking, tokenization)
- Data residency and sovereignty requirements
- Secure data storage patterns for generative AI (for example, encrypted S3 buckets, encrypted vector stores)
- Network security for generative AI applications (for example, VPC configurations, security groups, network ACLs)
Skills in:
- Implementing encryption for data at rest and in transit
- Configuring access controls for generative AI resources and data
- Implementing PII detection and redaction pipelines
- Configuring network security for generative AI applications
- Implementing data lifecycle management and retention policies
- Configuring VPC endpoints and AWS PrivateLink for private access to AI services
Task Statement 3.3: Implement AI governance and compliance mechanisms.
Knowledge of:
- AI governance frameworks and best practices (for example, model cards, documentation requirements, audit trails)
- AWS services for governance and compliance (for example, AWS CloudTrail, AWS Config, AWS Audit Manager, Amazon CloudWatch)
- Model inventory and lifecycle management
- Compliance requirements for AI systems (for example, data privacy regulations, industry-specific requirements)
- Logging and auditing requirements for generative AI applications
- Change management processes for AI models and applications
Skills in:
- Implementing audit trails and logging for generative AI applications
- Configuring AWS services for governance and compliance monitoring
- Creating and maintaining model documentation (for example, model cards, data sheets)
- Implementing compliance controls for data privacy and regulatory requirements
- Managing model inventories and lifecycle tracking
- Implementing change management processes for AI systems
Task Statement 3.4: Implement responsible AI principles.
Knowledge of:
- Responsible AI principles (for example, fairness, transparency, accountability, explainability, safety, privacy)
- Bias detection and mitigation techniques (for example, Amazon SageMaker Clarify, data auditing, model evaluation)
- Transparency and explainability methods for generative AI (for example, model documentation, output attribution, source citation)
- Human-in-the-loop patterns (for example, Amazon Augmented AI [Amazon A2I], review workflows, feedback loops)
- Environmental considerations for AI workloads (for example, compute optimization, carbon footprint awareness)
Skills in:
- Implementing bias detection and mitigation strategies
- Designing transparency and explainability features for generative AI applications
- Implementing human-in-the-loop workflows for AI oversight
- Designing feedback mechanisms to improve model fairness and accuracy
- Implementing source citation and attribution for RAG-based applications
- Monitoring and reporting on responsible AI metrics
Domain 4: Operational Efficiency and Optimization for GenAI Applications
Task Statement 4.1: Implement cost optimization and resource efficiency strategies.
Knowledge of:
- Cost components of generative AI applications (for example, token-based pricing, compute costs, storage costs, data transfer costs)
- Cost optimization strategies for FM usage (for example, model selection, prompt optimization, caching, batching, provisioned throughput)
- AWS pricing models for generative AI services (for example, Amazon Bedrock on-demand pricing, provisioned throughput pricing, Amazon SageMaker pricing)
- Resource right-sizing for generative AI workloads
- AWS cost management tools (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Reports)
Skills in:
- Implementing token usage optimization strategies (for example, prompt compression, response length management)
- Configuring provisioned throughput for predictable workloads
- Implementing caching strategies to reduce API calls and costs
- Monitoring and analyzing cost patterns for generative AI applications
- Implementing cost allocation and tagging strategies
- Right-sizing compute resources for generative AI workloads
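Caching is the most direct lever against token-based pricing: identical prompts should not be paid for twice. This sketch shows an in-memory TTL cache keyed on a hash of the model and prompt; a shared deployment would back the same pattern with Amazon ElastiCache or DynamoDB instead of a process-local dictionary.

```python
import hashlib
import time


class ResponseCache:
    """In-memory response cache keyed on a hash of (model_id, prompt)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}

    def _key(self, model_id, prompt):
        return hashlib.sha256(f"{model_id}\x00{prompt}".encode()).hexdigest()

    def get(self, model_id, prompt):
        entry = self._store.get(self._key(model_id, prompt))
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None  # miss or expired

    def put(self, model_id, prompt, response):
        self._store[self._key(model_id, prompt)] = (time.time(), response)
```

Exact-match caching only helps for repeated prompts; semantic caching (matching on embedding similarity) extends the idea to near-duplicate queries at the cost of added complexity.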
Task Statement 4.2: Optimize application performance.
Knowledge of:
- Performance optimization techniques for generative AI applications (for example, caching, batching, streaming, asynchronous processing)
- Latency optimization strategies (for example, model selection, inference parameter tuning, edge caching, connection pooling)
- Throughput optimization strategies (for example, auto-scaling, load balancing, concurrent request management)
- Amazon Bedrock performance features (for example, provisioned throughput, model invocation logging, batch inference)
- Amazon SageMaker performance optimization (for example, instance type selection, auto-scaling, multi-model endpoints)
Skills in:
- Implementing caching and batching strategies for performance optimization
- Configuring auto-scaling for generative AI workloads
- Implementing streaming responses for improved user experience
- Optimizing model inference parameters for performance
- Implementing load balancing and traffic management
- Profiling and benchmarking generative AI application performance
Task Statement 4.3: Implement monitoring systems for generative AI applications.
Knowledge of:
- Monitoring strategies for generative AI applications (for example, operational metrics, model performance metrics, business metrics)
- AWS monitoring services (for example, Amazon CloudWatch, AWS CloudTrail, AWS X-Ray)
- Amazon Bedrock monitoring features (for example, model invocation logging, CloudWatch metrics)
- Alerting and notification strategies (for example, Amazon CloudWatch alarms, Amazon SNS)
- Logging best practices for generative AI applications (for example, structured logging, log aggregation, log retention)
- Observability patterns for distributed generative AI systems
Skills in:
- Configuring Amazon CloudWatch metrics and dashboards for generative AI applications
- Implementing model invocation logging and analysis
- Setting up alerting and notification systems for operational issues
- Implementing distributed tracing for generative AI applications
- Designing monitoring dashboards for operational and business metrics
- Implementing log aggregation and analysis workflows
Domain 5: Testing, Validation, and Troubleshooting
Task Statement 5.1: Implement evaluation systems for generative AI.
Knowledge of:
- Evaluation methodologies for generative AI (for example, automated evaluation, human evaluation, benchmark datasets)
- Evaluation metrics for text generation (for example, ROUGE, BLEU, BERTScore, perplexity, semantic similarity)
- Evaluation metrics for RAG systems (for example, retrieval accuracy, answer relevance, faithfulness, context relevance)
- Amazon Bedrock evaluation features (for example, model evaluation jobs, human evaluation workflows)
- A/B testing and experimentation frameworks for generative AI
- Evaluation data management (for example, golden datasets, test suites, evaluation harnesses)
Skills in:
- Designing and implementing automated evaluation pipelines
- Configuring Amazon Bedrock model evaluation jobs
- Implementing human evaluation workflows
- Creating and managing evaluation datasets and test suites
- Implementing A/B testing for generative AI features
- Analyzing evaluation results and making data-driven decisions
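To ground the metrics above, here is a self-contained sketch of unigram-overlap scoring in the spirit of ROUGE-1 (production evaluations would use an established implementation with stemming and proper multi-reference handling, or an Amazon Bedrock model evaluation job).

```python
def rouge1_scores(reference, candidate):
    """Unigram-overlap precision, recall, and F1 between two strings."""
    ref = reference.lower().split()
    cand = candidate.lower().split()

    # Clipped overlap: each reference word can be matched at most
    # as many times as it appears in the reference.
    ref_counts = {}
    for word in ref:
        ref_counts[word] = ref_counts.get(word, 0) + 1
    overlap = 0
    for word in cand:
        if ref_counts.get(word, 0) > 0:
            overlap += 1
            ref_counts[word] -= 1

    precision = overlap / len(cand) if cand else 0.0
    recall = overlap / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```

Overlap metrics are cheap and reproducible but reward surface similarity, which is why they are typically paired with semantic-similarity scores and human evaluation.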
Task Statement 5.2: Troubleshoot generative AI applications.
Knowledge of:
- Common failure modes in generative AI applications (for example, hallucinations, context window limitations, token limits, rate limiting, timeout errors)
- Debugging techniques for generative AI applications (for example, log analysis, trace analysis, prompt debugging, response analysis)
- AWS troubleshooting tools and services (for example, Amazon CloudWatch Logs, AWS X-Ray, AWS CloudTrail)
- Error handling patterns for generative AI applications (for example, retry strategies, circuit breakers, fallback mechanisms, graceful degradation)
- Performance troubleshooting (for example, latency analysis, throughput bottleneck identification, resource utilization analysis)
Skills in:
- Diagnosing and resolving common generative AI application issues
- Using AWS monitoring and logging tools for troubleshooting
- Implementing error handling and recovery mechanisms
- Debugging prompt-related issues (for example, unexpected outputs, inconsistent responses)
- Troubleshooting integration issues (for example, API errors, authentication failures, network connectivity)
- Analyzing and resolving performance bottlenecks
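The circuit breaker pattern named above can be sketched in a few lines: after a threshold of consecutive failures the breaker fails fast instead of hammering a degraded model endpoint, then allows a trial call after a cooldown. Thresholds and timings here are illustrative.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: opens after N consecutive failures,
    fails fast while open, and half-opens after a cooldown period."""

    def __init__(self, failure_threshold=3, reset_seconds=30):
        self.failure_threshold = failure_threshold
        self.reset_seconds = reset_seconds
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_seconds:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Paired with a fallback (a cheaper model, a cached response, or a graceful error message), this turns repeated FM API failures into degraded service rather than an outage.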
Appendix
Technologies and Concepts That Might Appear on the Exam
The following list contains technologies and concepts that might appear on the exam. This list is non-exhaustive and is subject to change. The order and placement of the items in this list are not indicative of their relative weight or importance on the exam:
- Transformer architectures and attention mechanisms
- Foundation models (FMs) and large language models (LLMs)
- Multi-modal models (for example, text, image, audio, video)
- Embedding models and vector representations
- Tokenization and token management
- Retrieval Augmented Generation (RAG)
- Prompt engineering (for example, zero-shot, few-shot, chain-of-thought)
- Fine-tuning and continued pre-training
- Reinforcement learning from human feedback (RLHF)
- Agentic AI and autonomous agents
- Tool use and function calling
- Vector databases and similarity search
- Chunking strategies (for example, fixed-size, semantic, recursive)
- Context window management
- Streaming inference
- Model evaluation metrics (for example, ROUGE, BLEU, BERTScore, perplexity)
- Guardrails and content filtering
- Prompt injection and adversarial attacks
- PII detection and redaction
- Model cards and documentation
- Responsible AI principles (for example, fairness, transparency, accountability)
- CI/CD for generative AI applications
- Infrastructure as code (IaC)
- Containerization and serverless computing
- API design and management
- Event-driven architectures
- Caching strategies
- Cost optimization for token-based pricing
- Monitoring and observability
- A/B testing and experimentation
In-scope AWS Services and Features
The following list contains AWS services and features that are in scope for the exam. This list is non-exhaustive and is subject to change. AWS offerings appear in categories that align with the offerings' primary functions:
Analytics:
- Amazon Athena
- Amazon Kinesis
- Amazon OpenSearch Service
- AWS Glue
Compute:
- Amazon EC2
- Amazon EC2 Auto Scaling
- AWS Lambda
Containers:
- Amazon Elastic Container Registry (Amazon ECR)
- Amazon Elastic Container Service (Amazon ECS)
- Amazon Elastic Kubernetes Service (Amazon EKS)
- AWS Fargate
Database:
- Amazon Aurora
- Amazon DocumentDB (with MongoDB compatibility)
- Amazon DynamoDB
- Amazon ElastiCache
- Amazon MemoryDB
- Amazon Neptune
- Amazon RDS
Developer Tools:
- AWS Cloud Development Kit (AWS CDK)
- AWS CloudFormation
- AWS CodeBuild
- AWS CodePipeline
- Amazon Q Developer
Machine Learning:
- Amazon Augmented AI (Amazon A2I)
- Amazon Bedrock
- Amazon Bedrock Agents
- Amazon Bedrock Guardrails
- Amazon Bedrock Knowledge Bases
- Amazon Comprehend
- Amazon Rekognition
- Amazon SageMaker
- Amazon Textract
- Amazon Transcribe
- Amazon Translate
Management and Governance:
- AWS CloudTrail
- Amazon CloudWatch
- Amazon CloudWatch Logs
- AWS Config
- Amazon EventBridge
- AWS Systems Manager
- AWS Trusted Advisor
- AWS Well-Architected Tool
- AWS X-Ray
Networking and Content Delivery:
- Amazon API Gateway
- Amazon CloudFront
- Elastic Load Balancing (ELB)
- Amazon VPC
- AWS PrivateLink
Security, Identity, and Compliance:
- AWS Audit Manager
- Amazon Cognito
- AWS Identity and Access Management (IAM)
- AWS Key Management Service (AWS KMS)
- Amazon Macie
- AWS Secrets Manager
- AWS Security Token Service (AWS STS)
- AWS WAF
Storage:
- Amazon Elastic Block Store (Amazon EBS)
- Amazon Elastic File System (Amazon EFS)
- Amazon S3
- Amazon S3 Glacier
Application Integration:
- Amazon Simple Notification Service (Amazon SNS)
- Amazon Simple Queue Service (Amazon SQS)
- AWS Step Functions
Out-of-scope AWS Services and Features
The following list contains AWS services and features that are out of scope for the exam. This list is non-exhaustive and is subject to change. AWS offerings that are entirely unrelated to the target job roles for the exam are excluded from this list:
Analytics:
- AWS Clean Rooms
- Amazon CloudSearch
- AWS Data Exchange
- Amazon Data Firehose
- Amazon FinSpace
- AWS Lake Formation
- Amazon Managed Streaming for Apache Kafka (Amazon MSK)
- Amazon QuickSight
- Amazon Redshift
Application Integration:
- Amazon AppFlow
- Amazon MQ
- Amazon Simple Workflow Service (Amazon SWF)
Business Applications:
- Amazon Chime
- Amazon Connect
- Amazon Pinpoint
- Amazon Simple Email Service (Amazon SES)
- AWS Supply Chain
- AWS Wickr
- Amazon WorkDocs
- Amazon WorkMail
Cloud Financial Management:
- AWS Application Cost Profiler
- AWS Billing Conductor
- AWS Budgets
- AWS Cost Explorer
- AWS Marketplace
Compute:
- AWS App Runner
- AWS Batch
- AWS Elastic Beanstalk
- EC2 Image Builder
- Amazon Lightsail
Containers:
- Red Hat OpenShift Service on AWS (ROSA)
Customer Enablement:
- AWS IQ
- AWS Managed Services (AMS)
- AWS re:Post Private
- AWS Support
Database:
- Amazon Keyspaces (for Apache Cassandra)
- Amazon Quantum Ledger Database (Amazon QLDB)
- Amazon Timestream
Developer Tools:
- AWS AppConfig
- AWS Application Composer
- AWS Cloud9
- AWS CloudShell
- Amazon CodeCatalyst
- AWS CodeStar
- AWS Fault Injection Service
End User Computing:
- Amazon AppStream 2.0
- Amazon WorkSpaces
- Amazon WorkSpaces Thin Client
- Amazon WorkSpaces Web
Frontend Web and Mobile:
- AWS Amplify
- AWS AppSync
- AWS Device Farm
- Amazon Location Service
Internet of Things (IoT):
- AWS IoT Analytics
- AWS IoT Core
- AWS IoT Device Defender
- AWS IoT Device Management
- AWS IoT Events
- AWS IoT FleetWise
- FreeRTOS
- AWS IoT Greengrass
- AWS IoT SiteWise
- AWS IoT TwinMaker
Machine Learning:
- Amazon Bedrock Studio
- AWS DeepComposer
- AWS DeepRacer
- Amazon Forecast
- Amazon Fraud Detector
- AWS HealthImaging
- AWS HealthOmics
- Amazon Kendra
- Amazon Lex
- Amazon Lookout for Equipment
- Amazon Lookout for Metrics
- Amazon Lookout for Vision
- Amazon Monitron
- AWS Panorama
- Amazon Personalize
- Amazon Polly
- Amazon Q Business
Management and Governance:
- AWS Control Tower
- AWS Health Dashboard
- AWS Launch Wizard
- AWS License Manager
- Amazon Managed Grafana
- Amazon Managed Service for Prometheus
- AWS OpsWorks
- AWS Organizations
- AWS Proton
- AWS Resilience Hub
- AWS Resource Explorer
- AWS Resource Groups
- AWS Systems Manager Incident Manager
- AWS Service Catalog
- Service Quotas
Media:
- Amazon Elastic Transcoder
- AWS Elemental MediaConnect
- AWS Elemental MediaConvert
- AWS Elemental MediaLive
- AWS Elemental MediaPackage
- AWS Elemental MediaStore
- AWS Elemental MediaTailor
- Amazon Interactive Video Service (Amazon IVS)
Migration and Transfer:
- AWS Application Discovery Service
- AWS Application Migration Service
- AWS Database Migration Service (AWS DMS)
- AWS DataSync
- AWS Mainframe Modernization
- AWS Migration Hub
- AWS Snow Family
- AWS Transfer Family
Networking and Content Delivery:
- AWS App Mesh
- AWS Cloud Map
- AWS Direct Connect
- AWS Global Accelerator
- AWS Private 5G
- Amazon Route 53
- Amazon VPC IP Address Manager (IPAM)
Security, Identity, and Compliance:
- AWS Artifact
- AWS Certificate Manager (ACM)
- AWS CloudHSM
- Amazon Detective
- AWS Directory Service
- AWS Firewall Manager
- Amazon GuardDuty
- AWS IAM Identity Center
- Amazon Inspector
- AWS Payment Cryptography
- AWS Private Certificate Authority
- AWS Resource Access Manager (AWS RAM)
- AWS Security Hub
- Amazon Security Lake
- AWS Shield
- AWS Signer
- Amazon Verified Permissions
Storage:
- AWS Backup
- AWS Elastic Disaster Recovery
Survey
How useful was this exam guide? Let us know by taking our survey.