DP-203 Exam Questions and Answers (P2)

Welcome to your DP-203 Exam Questions and Answers (Part 2)

You are building an Azure Stream Analytics query that will receive input data from Azure IoT Hub and write the results to Azure Blob storage. You need to calculate the difference in readings per sensor per hour. Which query should you use?
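For context, this kind of calculation compares each event with the previous event from the same sensor. A minimal Stream Analytics sketch, assuming hypothetical input/output aliases and a schema with sensorId, reading, and eventTime columns:

```sql
-- Minimal sketch; the input/output aliases and the
-- sensorId / reading / eventTime schema are assumptions.
SELECT
    sensorId,
    -- LAG fetches the previous reading from the same sensor within the last hour,
    -- so subtracting it gives the per-sensor difference.
    reading - LAG(reading) OVER (PARTITION BY sensorId LIMIT DURATION(hour, 1)) AS readingDelta
INTO
    [blob-output]
FROM
    [iothub-input] TIMESTAMP BY eventTime
```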

You have two Azure Data Factory instances named dev1 and prod1. dev1 connects to an Azure DevOps Git repository. You publish changes from the main branch of the Git repository to dev1. You need to deploy the artifacts from dev1 to prod1. What should you do first?

You are designing an Azure Stream Analytics job to process incoming events from sensors in retail environments. You need to process the events to produce a running average of shopper counts during the previous 15 minutes, calculated at five-minute intervals. Which type of window should you use?
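For context, a hopping window is a fixed-size window that repeats at a fixed interval, so consecutive windows overlap. A minimal sketch of a 15-minute average recalculated every five minutes, assuming hypothetical input and column names:

```sql
-- Hopping window sketch: a 15-minute average of shopper counts, recalculated every 5 minutes.
-- The input alias and the ShopperCount / EventTime columns are assumptions.
SELECT
    System.Timestamp() AS WindowEnd,
    AVG(ShopperCount)  AS AvgShopperCount
FROM
    [sensor-input] TIMESTAMP BY EventTime
GROUP BY
    HoppingWindow(minute, 15, 5)   -- time unit, window size, hop size
```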

You have an Azure Synapse Analytics dedicated SQL pool. You need to ensure that data in the pool is encrypted at rest. The solution must NOT require modifying applications that query the data. What should you do?
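For reference, one mechanism that encrypts a dedicated SQL pool at rest without any application changes is transparent data encryption (TDE). A minimal sketch, assuming a hypothetical pool name and that the statement is run while connected to the master database:

```sql
-- Transparent data encryption (TDE) encrypts the pool at rest; queries are unaffected.
-- The pool name is an assumption; run this from the master database.
ALTER DATABASE [SQLPool01] SET ENCRYPTION ON;
```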

You plan to create an Azure Synapse Analytics dedicated SQL pool. You need to minimize the time it takes to identify queries that return confidential information, as defined by the company's data privacy regulations, and the users who executed the queries. Which two components should you include in the solution?
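For reference, column-level sensitivity classification is applied with T-SQL, and classified columns can then be surfaced in audit records so that queries touching them, and the users who ran them, are easier to find. A minimal sketch with hypothetical schema, table, column, and label names:

```sql
-- Sensitivity-classification sketch; the schema, table, column, label,
-- and information type are assumptions.
ADD SENSITIVITY CLASSIFICATION TO dbo.DimCustomer.EmailAddress
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');
```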

You have an Azure Synapse Analytics dedicated SQL pool that contains a large fact table. The table contains 50 columns and 5 billion rows and is a heap. Most queries against the table aggregate values from approximately 100 million rows and return only two columns. You discover that the queries against the fact table are very slow. Which type of index should you add to provide the fastest query times?
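For reference, a heap can be converted to columnstore storage by creating a clustered columnstore index on it, which compresses the data and lets aggregations read only the needed columns. A minimal sketch with a hypothetical table name:

```sql
-- Convert the heap to columnstore storage; the table and index names are assumptions.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactTransactions
    ON dbo.FactTransactions;
```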

Columnstore tables generally won't push data into a compressed columnstore segment until there are more than .............. rows per table.

...................... is a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands.

...................... supports up to 100 TB of data and provides high throughput and performance, as well as rapid scaling to adapt to the workload requirements.

...................... is designed for customers looking to migrate a large number of apps from an on-premises, IaaS, self-built, or ISV-provided environment to a fully managed PaaS cloud environment with as low a migration effort as possible.

The data engineering team manages Azure HDInsight clusters. The team spends a large amount of time creating and destroying clusters daily because most of the data pipeline processes run in minutes. You need to implement a solution that deploys multiple HDInsight clusters with minimal effort. What should you implement?

You are designing an Azure SQL Database that will use elastic pools. You plan to store data about customers in a table. Each record uses a value for CustomerID. What's the recommended strategy to partition data based on values in CustomerID?

Which statement is correct about Vertical Partitioning?

Which statement is correct about Horizontal Partitioning?
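To contrast the two partitioning styles asked about above: vertical partitioning splits the columns of one logical entity across stores, while horizontal partitioning (sharding) keeps the same schema everywhere and splits the rows by a key. A minimal sketch with hypothetical table and column names:

```sql
-- Vertical partitioning: the columns of one logical entity are split across tables/stores.
-- All names below are hypothetical.
CREATE TABLE dbo.CustomerCore    (CustomerID int NOT NULL, Name nvarchar(100), Email nvarchar(200));
CREATE TABLE dbo.CustomerProfile (CustomerID int NOT NULL, PhotoUrl nvarchar(400), Notes nvarchar(4000));

-- Horizontal partitioning (sharding): every partition keeps the full schema,
-- and rows are split by a key, e.g. shard 1 holds CustomerID 1-999999,
-- shard 2 holds CustomerID 1000000 and above.
```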

Which slowly changing dimension type is described in the image below?

An example Type 1 SCD row that updates CompanyName and ModifiedDate.

Which slowly changing dimension type is described in the image below?

An example Type 2 SCD row that shows a new record for Region change.
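For reference, the Type 2 pattern shown in that caption is commonly implemented by closing out the current row and inserting a new one. A minimal T-SQL sketch with hypothetical table and column names:

```sql
-- Type 2 sketch: close out the current row, then insert a new row for the changed Region.
-- The DimCustomer table and its columns are assumptions.
DECLARE @CustomerID int = 1234;
DECLARE @NewRegion nvarchar(50) = N'West';

UPDATE dbo.DimCustomer
SET    EndDate = GETDATE(), IsCurrent = 0
WHERE  CustomerID = @CustomerID AND IsCurrent = 1;

INSERT INTO dbo.DimCustomer (CustomerID, Region, StartDate, EndDate, IsCurrent)
VALUES (@CustomerID, @NewRegion, GETDATE(), NULL, 1);
```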

Which slowly changing dimension type is described in the image below?

An example Type 3 SCD row that shows an updated CurrentEmail column and an unchanged OriginalEmail column.

Which slowly changing dimension type is described in the image below?

An example Type 6 SCD row that shows a new record for Region change with CurrentRegion updated for old and new row.

............... is a column with a unique identifier for each row that is not generated from the table data; data modelers like to create it on their tables when they design data warehouse models.
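For reference, in a dedicated SQL pool a generated key of this kind is often produced with an IDENTITY column. A minimal sketch with hypothetical names:

```sql
-- Generated-key sketch for a dedicated SQL pool; all names are assumptions.
CREATE TABLE dbo.DimProduct
(
    ProductKey    int IDENTITY(1, 1) NOT NULL,  -- generated key, not taken from source data
    ProductAltKey nvarchar(25)       NOT NULL,  -- business (natural) key from the source system
    ProductName   nvarchar(100)      NOT NULL
)
WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);
```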

You need to design an Azure Synapse Analytics dedicated SQL pool that can return an employee record from a given point in time, maintains the latest employee information, and minimizes query complexity. How should you model the employee data?
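If the employee data were modeled as a Type 2 style dimension with effective-date columns, a point-in-time lookup stays a simple filter. A minimal sketch, with hypothetical table, column, and parameter values:

```sql
-- Hypothetical Type 2 style employee dimension with effective dates.
CREATE TABLE dbo.DimEmployee
(
    EmployeeKey int IDENTITY(1, 1) NOT NULL,  -- generated surrogate key
    EmployeeID  int                NOT NULL,  -- business key
    Department  nvarchar(50)       NOT NULL,
    StartDate   date               NOT NULL,
    EndDate     date               NULL       -- NULL marks the current (latest) row
);

-- The employee record as it looked on a given date (the ID and date are placeholders).
SELECT *
FROM   dbo.DimEmployee
WHERE  EmployeeID = 1234
  AND  StartDate <= '2023-06-01'
  AND  (EndDate > '2023-06-01' OR EndDate IS NULL);
```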
