1. Design and implement data models (35–40%)
Design and implement a non-relational data model for Azure Cosmos DB for NoSQL
Develop a design by storing multiple entity types in the same container
Develop a design by storing multiple related entities in the same document
Develop a model that denormalizes data across documents
Develop a design by referencing between documents
Identify partition key, id, and unique keys
Identify data and associated access patterns
Specify a default time to live (TTL) on a container for a transactional store
Develop a design for versioning documents
Develop a design for document schema versioning
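The modeling objectives above (mixed entity types, embedding, referencing, and schema versioning) can be sketched with plain documents. A minimal illustration; the entity names, fields, and `type`/`schemaVersion` conventions are assumptions, not prescribed by the outline:

```python
# Two entity types ("customer" and "order") stored in the same container,
# sharing a partition key value so related items are co-located.
# A "type" discriminator supports mixed-entity containers and a
# "schemaVersion" field supports document schema versioning.

customer = {
    "id": "customer-1",
    "type": "customer",          # discriminator for mixed-entity containers
    "customerId": "customer-1",  # partition key: groups customer + orders
    "schemaVersion": 2,          # document schema versioning
    "name": "Ada",
    # Embedded related entities: addresses live inside the customer
    # document because they are always read together with it.
    "addresses": [{"city": "Lagos", "kind": "shipping"}],
}

order = {
    "id": "order-42",
    "type": "order",
    "customerId": "customer-1",  # same partition key value as its customer
    "schemaVersion": 1,
    # Reference (not embed) products: they change independently and are
    # shared across many orders.
    "productIds": ["product-7"],
    "total": 19.99,
}
```

Reading "a customer and all their orders" then becomes a single-partition query filtered on `customerId`, with `type` distinguishing the entities.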
Design a data partitioning strategy for Azure Cosmos DB for NoSQL
Choose a partitioning strategy based on a specific workload
Choose a partition key
Plan for transactions when choosing a partition key
Evaluate the cost of using a cross-partition query
Calculate and evaluate data distribution based on partition key selection
Calculate and evaluate throughput distribution based on partition key selection
Construct and implement a synthetic partition key
Design and implement a hierarchical partition key
Design partitioning for workloads that require multiple partition keys
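Synthetic partition keys from the objectives above are computed client-side before writing. A minimal sketch, assuming illustrative property names (tenant/month, device IDs); hierarchical keys, by contrast, are declared at container creation with multiple paths (e.g. `/tenantId`, `/userId`) rather than computed:

```python
import hashlib

def synthetic_partition_key(tenant_id: str, month: str) -> str:
    """Concatenate two properties into one synthetic partition key,
    e.g. to split a large per-tenant workload by time period."""
    return f"{tenant_id}-{month}"

def suffixed_partition_key(device_id: str, buckets: int = 10) -> str:
    """Append a deterministic hash-based suffix so a hot key fans out
    over `buckets` logical partitions; reads must then fan in across
    all suffixes."""
    digest = hashlib.sha256(device_id.encode()).hexdigest()
    suffix = int(digest, 16) % buckets
    return f"{device_id}-{suffix}"
```

The suffix must be deterministic so the same item always lands in (and can be read back from) the same logical partition.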
Plan and implement sizing and scaling for Azure Cosmos DB
Evaluate the throughput and data storage requirements for a specific workload
Choose between serverless, provisioned throughput, and free tier
Choose when to use database-level provisioned throughput
Design for granular scale units and resource governance
Evaluate the cost of the global distribution of data
Configure throughput for Azure Cosmos DB by using the Azure portal
Implement client connectivity options
Choose a connectivity mode (gateway versus direct)
Implement a connectivity mode
Create a connection to a database
Enable offline development by using the Azure Cosmos DB emulator
Handle connection errors
Implement a singleton for the client
Specify a region for global distribution
Configure client-side threading and parallelism options
Enable SDK logging
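The singleton-client objective above matters because each client maintains its own connection pool; one shared instance per process is the recommended shape. A minimal, service-free sketch of the pattern (the factory is injected here for illustration; in a real app it would be something like `lambda: CosmosClient(endpoint, credential=key)` from the azure-cosmos SDK, with connection mode chosen at construction):

```python
import threading
from typing import Any, Callable

_client = None
_lock = threading.Lock()

def get_client(factory: Callable[[], Any]):
    """Return a process-wide singleton, creating it on first use.

    Double-checked locking keeps the fast path lock-free while ensuring
    the factory (e.g. a CosmosClient constructor) runs exactly once
    even under concurrent first calls.
    """
    global _client
    if _client is None:
        with _lock:
            if _client is None:
                _client = factory()
    return _client
```

Every caller then shares the same client, its region preferences, and its connection pool for the life of the process.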
Implement data access using SQL language
Implement queries that use arrays, nested objects, aggregation, and ordering
Implement a correlated subquery
Implement queries that use array and type-checking functions
Implement queries that use mathematical, string, and date functions
Implement queries based on variable data
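The query objectives above can be illustrated with a few NoSQL query strings of the kind passed to the SDK's query APIs. A sketch only; the container alias `c`, property names, and parameters (`@city`, `@since`) are assumptions:

```python
# Arrays, nested objects, and ordering: built-in array functions plus a
# path into a nested object, with parameterized filtering.
top_customers = """
SELECT c.name, ARRAY_LENGTH(c.orders) AS orderCount
FROM c
WHERE ARRAY_CONTAINS(c.tags, 'vip') AND c.address.city = @city
ORDER BY c.name ASC
"""

# Correlated subquery: JOIN over a filtered projection of a nested array,
# evaluated per document.
large_order_items = """
SELECT c.id, item.sku
FROM c
JOIN item IN (SELECT VALUE o FROM o IN c.orderItems WHERE o.qty > 1)
"""

# Variable (schema-free) data: type-checking functions guard properties
# that may be absent or of the wrong type, combined with string functions.
mixed_docs = """
SELECT c.id,
       (IS_DEFINED(c.discount) AND IS_NUMBER(c.discount) ? c.discount : 0)
       AS discount,
       UPPER(c.name) AS name
FROM c
WHERE c.createdAt >= @since
"""
```

Each string would be executed with something like `container.query_items(query=..., parameters=[...])` in the Python SDK.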
Implement data access using SDKs
Choose when to use a point operation versus a query operation
Implement point operations to create, update, and delete items
Implement updates using patch operations
Manage multi-item transactions using Transactional Batch
Perform bulk operations using SDK Bulk Support
Implement optimistic concurrency using ETags
Override consistency using query request options
Implement session consistency using session tokens
Implement query operations with pagination
Implement query operations using continuation tokens
Handle transient errors and 429s
Specify TTL for an item
Retrieve and use query metrics
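Handling transient errors and 429s, from the list above, follows a standard retry-with-backoff shape. A service-free sketch: the error class below is a stand-in for the SDK's `azure.cosmos.exceptions.CosmosHttpResponseError`, and the status codes and backoff constants are illustrative:

```python
import random
import time

class CosmosHTTPError(Exception):
    """Stand-in for the SDK's HTTP error type (status_code plus the
    server's retry-after hint on 429 responses)."""
    def __init__(self, status_code: int, retry_after: float = 0.0):
        super().__init__(f"status {status_code}")
        self.status_code = status_code
        self.retry_after = retry_after

def with_retries(op, max_attempts: int = 5, sleep=time.sleep):
    """Retry an operation on 429 (rate limited) and 503 (transient),
    honoring the retry-after hint plus jittered exponential backoff.
    Non-transient errors (e.g. 401, 404) are raised immediately."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except CosmosHTTPError as err:
            if err.status_code not in (429, 503) or attempt == max_attempts:
                raise
            backoff = err.retry_after + (2 ** attempt) * 0.01 * random.random()
            sleep(backoff)
```

The SDKs already retry 429s internally a configurable number of times; application-level handling like this is the fallback once those retries are exhausted.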
Implement server-side programming
Write, deploy, and call stored procedures
Design stored procedures for transactional operations within partitions
Implement and call triggers
Implement user-defined functions
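Stored procedures are authored in JavaScript and execute server-side, scoped to a single logical partition, which is what makes them transactional. A sketch of one such definition held as data; the procedure name and logic are illustrative, and registration would go through the SDK's scripts API (e.g. `container.scripts.create_stored_procedure(body=...)` in Python):

```python
# A stored procedure definition: JavaScript body kept as a string.
# Throwing inside the body aborts the server-side transaction, rolling
# back any writes the procedure has made in that partition.
CREATE_IF_ABSENT_SPROC = {
    "id": "createIfAbsent",
    "body": """
function createIfAbsent(doc) {
    var ctx = getContext();
    var container = ctx.getCollection();
    var accepted = container.createDocument(
        container.getSelfLink(),
        doc,
        function (err, created) {
            if (err) throw err;  // abort -> transaction rolls back
            ctx.getResponse().setBody(created.id);
        });
    if (!accepted) throw new Error('Request not accepted, retry later.');
}
""",
}
```

Callers must supply the partition key value when executing it, since the procedure can only touch items in that one logical partition.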
2. Design and implement data distribution (5–10%)
Design and implement a replication strategy
Choose when to distribute data
Define automatic failover policies
Perform manual failovers
Choose a consistency model
Identify use cases for consistency models
Evaluate impact on availability and RU cost
Evaluate impact on performance and latency
Specify application connections to replicated data
Design and implement multi-region writes
Choose when to use multi-region writes
Implement multi-region writes
Implement custom conflict resolution policies
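Conflict resolution policies for multi-region writes are set per container at creation time. A sketch of the two policy shapes as they appear in the resource model; the resolution path and stored-procedure link are illustrative assumptions:

```python
# Last-writer-wins: the document with the highest value at the given
# numeric path wins; if the path is omitted, the system timestamp (_ts)
# is used.
last_writer_wins = {
    "mode": "LastWriterWins",
    "conflictResolutionPath": "/updatedAt",
}

# Custom: conflicts are routed to a stored procedure for resolution;
# anything it cannot resolve lands in the container's conflicts feed
# for the application to handle.
custom_policy = {
    "mode": "Custom",
    "conflictResolutionProcedure": "dbs/mydb/colls/orders/sprocs/resolver",
}
```

Either dictionary would be passed as the container's conflict resolution policy when the container is created; the policy cannot be changed afterward.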
3. Integrate an Azure Cosmos DB solution (5–10%)
Enable analytical workloads
Configure Azure Cosmos DB Mirroring for Microsoft Fabric
Choose between Mirroring and Spark connector
Enable analytical store on a container
Query analytical store using Synapse
Query transactional store from Spark
Write data back using Spark
Implement Change Data Capture
Implement time travel in Fabric Warehouse
Implement solutions across services
Integrate events using Azure Functions and Event Hubs
Denormalize data using Change Feed
Enforce referential integrity using Change Feed
Aggregate data using Change Feed
Archive data using Change Feed
Implement Azure AI Search integration
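Several objectives above use the change feed to maintain derived data (denormalization, aggregation, archiving). A minimal, service-free sketch of the aggregation pattern; the document shape is an assumption, and a real consumer (e.g. an Azure Functions trigger) would persist the result back into a separate container:

```python
from collections import defaultdict

def apply_change_feed_batch(totals, changed_docs):
    """Fold a batch of changed documents into a materialized
    per-customer running total.

    Assumes orders are insert-only: the change feed delivers the latest
    version of each changed item, so naively summing updates to existing
    orders would double-count.
    """
    for doc in changed_docs:
        if doc.get("type") != "order":
            continue  # mixed-entity container: skip non-order documents
        totals[doc["customerId"]] += doc["total"]
    return totals
```

A consumer would keep `totals` (e.g. a `defaultdict(float)`) as its materialized view and call this for every batch the change feed delivers.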
4. Optimize an Azure Cosmos DB solution (15–20%)
Optimize query performance
Adjust indexing policies
Calculate query cost
Retrieve RU cost
Implement integrated cache
Implement change feed solutions
Develop Azure Functions triggers
Consume change feed using SDK
Manage change feed instances
Implement denormalization
Implement referential integrity enforcement
Implement aggregation persistence
Implement data archiving
Design and implement an indexing strategy
Choose read-heavy vs write-heavy strategies
Choose index types
Configure indexing policies
Implement composite indexes
Optimize index performance
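The indexing objectives above come together in a container's indexing policy. A sketch of one tuned for a read pattern that filters on one property and orders by two others; all paths are illustrative assumptions:

```python
# Indexing policy: include everything by default, exclude a large
# subtree that is never queried (lowering the RU cost of writes), and
# add a composite index so "WHERE c.category = ... ORDER BY c.price
# DESC" style queries do not need a per-document sort.
indexing_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/blob/*"}],
    "compositeIndexes": [
        [
            {"path": "/category", "order": "ascending"},
            {"path": "/price", "order": "descending"},
        ]
    ],
}
```

A write-heavy container would push further in the other direction: exclude `/*` and include only the handful of paths queries actually filter or sort on.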
5. Maintain an Azure Cosmos DB solution (25–30%)
Monitor and troubleshoot an Azure Cosmos DB solution
Evaluate response status codes and failures
Monitor RU consumption
Monitor latency metrics
Monitor data replication
Configure alerts
Query resource logs
Monitor throughput across partitions
Monitor data distribution
Monitor security logs
Implement backup and restore
Choose backup type
Configure periodic backup
Configure continuous backup
Locate restore points
Restore databases or containers
Implement security
Choose encryption key type
Configure network access control
Configure data encryption
Manage control plane access using RBAC
Manage data plane access using Microsoft Entra ID
Configure CORS
Manage keys using Key Vault
Implement customer-managed keys
Implement Always Encrypted
Implement data movement
Choose data movement strategy
Use SDK bulk operations
Use Data Factory and Synapse
Use Kafka connector
Use Stream Analytics
Use Spark connector
Configure IoT Hub integration
Implement a DevOps process
Choose declarative vs imperative approaches
Use ARM templates
Manage throughput using PowerShell or CLI
Initiate regional failover
Maintain indexing policies using templates