DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB

Offered by Linux Training

The DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB course at Linux Training is designed for developers and IT professionals who want to build scalable, high-performance cloud-native applications using Azure Cosmos DB.

This course focuses on designing, developing, and optimizing applications that leverage the power of Azure Cosmos DB, a globally distributed, multi-model database service. Learners will gain practical experience in creating efficient data models, managing performance, and integrating Cosmos DB with modern cloud applications.


Course Overview

This program provides a comprehensive understanding of cloud-native application development using Azure Cosmos DB, enabling learners to build highly available and scalable applications with low latency and high throughput.


What You Will Learn

  • Introduction to Azure Cosmos DB
  • Designing Data Models for Cloud Applications
  • Working with Different APIs (SQL, MongoDB, etc.)
  • Data Partitioning and Indexing
  • Performance Optimization and Throughput Management
  • Security and Access Control
  • Monitoring and Troubleshooting

Why Choose This Course?

  • Industry-recognized certification (DP-420)
  • High-demand cloud and database skills
  • Hands-on practical training
  • Focus on real-world cloud application scenarios
  • Guidance from experienced trainers

Career Opportunities

After completing this course, you can explore roles such as:

  • Cloud Developer
  • Backend Developer
  • Azure Developer
  • Database Developer
  • Cloud Solutions Engineer

Who Can Join?

  • Developers and software engineers
  • IT professionals working with cloud technologies
  • Students interested in cloud application development
  • Anyone with basic programming and database knowledge

Build Scalable Cloud Applications with Cosmos DB

Join Linux Training and gain the skills needed to design and implement modern cloud-native applications using Azure Cosmos DB.

Modules

1. Design and implement data models (35–40%)

Design and implement a non-relational data model for Azure Cosmos DB for NoSQL

  • Develop a design by storing multiple entity types in the same container
  • Develop a design by storing multiple related entities in the same document
  • Develop a model that denormalizes data across documents
  • Develop a design by referencing between documents
  • Identify partition key, id, and unique keys
  • Identify data and associated access patterns
  • Specify a default time to live (TTL) on a container for a transactional store
  • Develop a design for versioning documents
  • Develop a design for document schema versioning
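To make the embed-versus-reference choice above concrete, here is a minimal sketch of the same order modeled both ways; the document shapes and field names are invented for illustration, not taken from the course:

```python
# Illustrative only: two ways to model an order in Azure Cosmos DB for NoSQL.

# Embedded (denormalized): the order and its line items live in one document,
# so a single point read retrieves everything.
order_embedded = {
    "id": "order-1001",
    "customerId": "cust-42",   # candidate partition key value
    "type": "order",           # entity type, useful when mixing types in a container
    "items": [
        {"sku": "sku-1", "qty": 2, "price": 9.99},
        {"sku": "sku-2", "qty": 1, "price": 24.50},
    ],
}

# Referenced (normalized): unbounded or independently updated data is split into
# separate documents, and the order points to them by id.
order_referenced = {
    "id": "order-1001",
    "customerId": "cust-42",
    "type": "order",
    "itemIds": ["orderitem-1", "orderitem-2"],
}

# With embedding, computing an order total needs no extra reads.
total = sum(i["qty"] * i["price"] for i in order_embedded["items"])
print(f"order total: {total:.2f}")
```

Embedding favors read-heavy patterns that fetch an order in one point read; referencing keeps unbounded or frequently changing data out of the hot document.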

Design a data partitioning strategy for Azure Cosmos DB for NoSQL

  • Choose a partitioning strategy based on a specific workload
  • Choose a partition key
  • Plan for transactions when choosing a partition key
  • Evaluate the cost of using a cross-partition query
  • Calculate and evaluate data distribution based on partition key selection
  • Calculate and evaluate throughput distribution based on partition key selection
  • Construct and implement a synthetic partition key
  • Design and implement a hierarchical partition key
  • Design partitioning for workloads that require multiple partition keys
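The synthetic-key idea above can be sketched as follows; the helper names are hypothetical (this is not SDK code), and the bucket suffix is one common way to spread a write-heavy key across more logical partitions:

```python
import zlib

def hash_bucket(value: str, buckets: int) -> int:
    # Deterministic bucket from CRC32; Python's built-in hash() is salted per
    # process, so it is unsuitable for a stable partition key suffix.
    return zlib.crc32(value.encode("utf-8")) % buckets

def synthetic_partition_key(tenant_id: str, day: str, item_id: str, buckets: int = 10) -> str:
    """Concatenate properties plus a hash suffix (e.g. 'contoso-2024-05-01-<n>')
    so items for one hot tenant/day spread across `buckets` logical partitions,
    while a reader who knows the item id can still recompute the exact key."""
    return f"{tenant_id}-{day}-{hash_bucket(item_id, buckets)}"

# A hierarchical partition key, by contrast, is declared as an ordered list of
# paths (up to three levels) when the container is created:
hierarchical_key_paths = ["/tenantId", "/userId", "/sessionId"]

print(synthetic_partition_key("contoso", "2024-05-01", "order-1001"))
```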

Plan and implement sizing and scaling for Azure Cosmos DB

  • Evaluate the throughput and data storage requirements for a specific workload
  • Choose among serverless, provisioned throughput, and the free tier
  • Choose when to use database-level provisioned throughput
  • Design for granular scale units and resource governance
  • Evaluate the cost of the global distribution of data
  • Configure throughput for Azure Cosmos DB by using the Azure portal
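A rough sizing calculation for the objectives above can be sketched like this; the 10,000 RU/s and 50 GB figures are the documented per-physical-partition limits at the time of writing, and the even-split rule mirrors how provisioned throughput is distributed:

```python
import math

# Assumed service limits (verify against current documentation): one physical
# partition serves at most ~10,000 RU/s and stores at most 50 GB.
MAX_RU_PER_PARTITION = 10_000
MAX_GB_PER_PARTITION = 50

def min_physical_partitions(total_ru: int, total_gb: float) -> int:
    """Lower bound on physical partitions implied by throughput and storage."""
    by_ru = math.ceil(total_ru / MAX_RU_PER_PARTITION)
    by_gb = math.ceil(total_gb / MAX_GB_PER_PARTITION)
    return max(by_ru, by_gb, 1)

def ru_per_partition(total_ru: int, partitions: int) -> float:
    # Throughput is split evenly, so a single hot partition key can only
    # consume this share before requests are throttled (HTTP 429).
    return total_ru / partitions

parts = min_physical_partitions(total_ru=30_000, total_gb=120)
print(parts, ru_per_partition(30_000, parts))  # 3 10000.0
```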

Implement client connectivity options

  • Choose a connectivity mode (gateway versus direct)
  • Implement a connectivity mode
  • Create a connection to a database
  • Enable offline development by using the Azure Cosmos DB emulator
  • Handle connection errors
  • Implement a singleton for the client
  • Specify a region for global distribution
  • Configure client-side threading and parallelism options
  • Enable SDK logging
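The singleton-client objective above can be sketched like this; `FakeCosmosClient` stands in for the real SDK client (such as `azure.cosmos.CosmosClient`), and the endpoint and key are placeholders:

```python
import threading

class FakeCosmosClient:
    """Stand-in for the SDK client; a real client opens connections on creation,
    which is why it should be built once and reused for the app's lifetime."""
    def __init__(self, endpoint: str, key: str):
        self.endpoint = endpoint

_client = None
_lock = threading.Lock()

def get_client() -> FakeCosmosClient:
    """Lazily create one shared client, thread-safe via double-checked locking."""
    global _client
    if _client is None:
        with _lock:
            if _client is None:
                _client = FakeCosmosClient("https://<account>.documents.azure.com", "<key>")
    return _client

print(get_client() is get_client())  # every caller shares one instance
```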

Implement data access using SQL language

  • Implement queries that use arrays, nested objects, aggregation, and ordering
  • Implement a correlated subquery
  • Implement queries that use array and type-checking functions
  • Implement queries that use mathematical, string, and date functions
  • Implement queries based on variable data

Implement data access using SDKs

  • Choose when to use a point operation versus a query operation
  • Implement point operations to create, update, and delete items
  • Implement updates using patch operations
  • Manage multi-item transactions using Transactional Batch
  • Perform bulk operations using SDK Bulk Support
  • Implement optimistic concurrency using ETags
  • Override consistency using query request options
  • Implement session consistency using session tokens
  • Implement query operations with pagination
  • Implement query operations using continuation tokens
  • Handle transient errors and 429s
  • Specify TTL for an item
  • Retrieve and use query metrics
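The transient-error objective above can be sketched as a retry loop that honors the service's retry-after hint on HTTP 429 ("request rate too large"); the exception type and operation are invented for illustration, and note that real SDKs already retry a configurable number of times before surfacing the error:

```python
import time

class TooManyRequests(Exception):
    """Simulated 429 response carrying the server's retry-after hint."""
    def __init__(self, retry_after_ms: int):
        self.retry_after_ms = retry_after_ms

def with_retries(operation, max_attempts: int = 5):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TooManyRequests as err:
            if attempt == max_attempts:
                raise                      # budget exhausted, surface the error
            time.sleep(err.retry_after_ms / 1000)  # back off as the service asks

# Simulated operation: throttled twice, then succeeds.
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TooManyRequests(retry_after_ms=1)
    return {"id": "item-1"}

print(with_retries(flaky_read))  # {'id': 'item-1'} after two retries
```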

Implement server-side programming

  • Write, deploy, and call stored procedures
  • Design stored procedures for transactional operations within partitions
  • Implement and call triggers
  • Implement user-defined functions

2. Design and implement data distribution (5–10%)

Replication strategy

  • Choose when to distribute data
  • Define automatic failover policies
  • Perform manual failovers
  • Choose a consistency model
  • Identify use cases for consistency models
  • Evaluate impact on availability and RU cost
  • Evaluate impact on performance and latency
  • Specify application connections to replicated data

Multi-region writes

  • Choose when to use multi-region writes
  • Implement multi-region writes
  • Implement custom conflict resolution policies

3. Integrate an Azure Cosmos DB solution (5–10%)

Enable analytical workloads

  • Configure Azure Cosmos DB Mirroring for Microsoft Fabric
  • Choose between Mirroring and Spark connector
  • Enable analytical store on a container
  • Query analytical store using Synapse
  • Query transactional store from Spark
  • Write data back using Spark
  • Implement Change Data Capture
  • Implement time travel in Fabric Warehouse

Implement solutions across services

  • Integrate events using Azure Functions and Event Hubs
  • Denormalize data using Change Feed
  • Enforce referential integrity using Change Feed
  • Aggregate data using Change Feed
  • Archive data using Change Feed
  • Implement Azure AI Search integration
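The change-feed patterns above (denormalization, aggregation) can be sketched with plain Python; the change feed is an ordered stream of inserted and updated documents, and the event shape here is invented for illustration:

```python
from collections import defaultdict

def apply_change_feed(events):
    """Maintain a denormalized running order total per customer by replaying
    change feed events (each event is the changed document itself)."""
    totals = defaultdict(float)
    for doc in events:
        totals[doc["customerId"]] += doc["amount"]
    return dict(totals)

events = [
    {"id": "o1", "customerId": "c1", "amount": 10.0},
    {"id": "o2", "customerId": "c2", "amount": 5.0},
    {"id": "o3", "customerId": "c1", "amount": 2.5},
]
print(apply_change_feed(events))  # {'c1': 12.5, 'c2': 5.0}
```

In practice the same loop runs inside an Azure Functions Cosmos DB trigger or the SDK's change feed processor, writing the aggregate back to another container.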

4. Optimize an Azure Cosmos DB solution (15–20%)

Optimize query performance

  • Adjust indexing policies
  • Calculate query cost
  • Retrieve RU cost
  • Implement integrated cache

Implement change feed solutions

  • Develop Azure Functions triggers
  • Consume change feed using SDK
  • Manage change feed instances
  • Implement denormalization
  • Implement referential enforcement
  • Implement aggregation persistence
  • Implement data archiving

Indexing strategy

  • Choose read-heavy vs write-heavy strategies
  • Choose index types
  • Configure indexing policies
  • Implement composite indexes
  • Optimize index performance
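As one illustration of these knobs, a container indexing policy with an excluded path and a composite index (supporting an ORDER BY over two properties) might look like the following; the property paths are examples, not from the course:

```json
{
  "indexingMode": "consistent",
  "includedPaths": [ { "path": "/*" } ],
  "excludedPaths": [ { "path": "/largeBlob/?" } ],
  "compositeIndexes": [
    [
      { "path": "/category", "order": "ascending" },
      { "path": "/price", "order": "descending" }
    ]
  ]
}
```

Excluding write-only paths reduces RU cost on writes, while composite indexes make multi-property ORDER BY queries efficient.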

5. Maintain an Azure Cosmos DB solution (25–30%)

Monitoring and troubleshooting

  • Evaluate response status codes and failures
  • Monitor RU consumption
  • Monitor latency metrics
  • Monitor data replication
  • Configure alerts
  • Query resource logs
  • Monitor throughput across partitions
  • Monitor data distribution
  • Monitor security logs

Backup and restore

  • Choose backup type
  • Configure periodic backup
  • Configure continuous backup
  • Locate restore points
  • Restore databases or containers

Security implementation

  • Choose encryption key type
  • Configure network access control
  • Configure data encryption
  • Manage control plane access using RBAC
  • Manage data plane access using Entra ID
  • Configure CORS
  • Manage keys using Key Vault
  • Implement customer-managed keys
  • Implement Always Encrypted

Data movement

  • Choose data movement strategy
  • Use SDK bulk operations
  • Use Data Factory and Synapse
  • Use Kafka connector
  • Use Stream Analytics
  • Use Spark connector
  • Configure IoT Hub integration

DevOps implementation

  • Choose declarative vs imperative approaches
  • Use ARM templates
  • Manage throughput using PowerShell or CLI
  • Initiate regional failover
  • Maintain indexing policies using templates