1z0-449 Exam Questions Answers

Valid4sure 1z0-449 VCE Practice Test

Newly updated 1z0-449 exam questions from the valid4sure 1z0-449 PDF dumps! Welcome to download the newest valid4sure 1z0-449 VCE dumps (72 Q&As).

  • Oracle Big Data 2017 Implementation Essentials certification exam


PS. New 1z0-449 dumps PDF – https://www.valid4sure.com/top/demo/Oracle/1z0-449.pdf

1z0-449 Practice Test Questions Answers – 1z0-449 braindump

QUESTION NO: 21

Your customer is using the IKM SQL to HDFS File (Sqoop) module to move data from Oracle to HDFS. However, the customer is experiencing performance issues.

 

What change should you make to the default configuration to improve performance?

 

A. Change the ODI configuration to high performance mode.

B. Increase the number of Sqoop mappers.

C. Add additional tables.

D. Change the HDFS server I/O settings to duplex mode.

 

Answer: B
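
The IKM ultimately generates a Sqoop command, and Sqoop parallelizes an import across its mappers (four by default), so raising the mapper count is the lever for throughput. As a hedged sketch in plain Sqoop, this is what an import with more mappers looks like; the connection string, credentials, table, and paths are hypothetical, not from the exam:

    # Hypothetical example: import an Oracle table into HDFS with 8 parallel
    # mappers instead of Sqoop's default of 4.
    sqoop import \
      --connect jdbc:oracle:thin:@//dbhost.example.com:1521/ORCL \
      --username SCOTT --password-file /user/scott/.pw \
      --table EMPLOYEES \
      --target-dir /data/employees \
      -m 8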

 

QUESTION NO: 22

What is the result when a Flume event occurs for the following single-node configuration?

 

A. The event is written to memory.

B. The event is logged to the screen.

C. The event output is not defined in this section.

D. The event is sent out on port 44444.

E. The event is written to the netcat process.

 

Answer: B
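
The question's configuration exhibit is not reproduced above. As an assumed stand-in, the canonical single-node Flume configuration from the Flume user guide wires a netcat source to a logger sink through a memory channel; with a logger sink, each event is logged to the console, which is why B is the answer. Agent, source, sink, and channel names below are the conventional ones, not taken from the exhibit:

    # example.conf: single-node Flume agent (assumed stand-in for the exhibit)
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1

    # netcat source listening on port 44444
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    # logger sink: each event is logged to the console
    a1.sinks.k1.type = logger

    # memory channel buffers events between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # wire the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1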

 

QUESTION NO: 23

What kind of workload is MapReduce designed to handle?

 

A. batch processing

B. interactive

C. computational

D. real time

E. commodity

 

Answer: A
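
A MapReduce job is submitted, runs to completion over the whole data set, and then writes its results, which is the definition of batch processing. A minimal sketch of submitting such a batch job with the stock examples jar; the jar path varies by distribution, so this location is an assumption:

    # Run the bundled WordCount example as a batch job over an HDFS directory
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
      wordcount /user/demo/input /user/demo/output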

 

QUESTION NO: 24

Your customer uses LDAP for centralized user/group management.

 

How will you integrate permissions management for the customer’s Big Data Appliance into the existing architecture?

 

A. Make Oracle Identity Management for Big Data the single source of truth and point LDAP to its keystore for user lookup.

B. Enable Oracle Identity Management for Big Data and point its keystore to the LDAP directory for user lookup.

C. Make Kerberos the single source of truth and have LDAP use the Key Distribution Center for user lookup.

D. Enable Kerberos and have the Key Distribution Center use the LDAP directory for user lookup.

 

Answer: D
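
One concrete way a KDC can use an LDAP directory is MIT Kerberos' kldap database module. The kdc.conf fragment below is only an illustrative sketch of that idea (on a Big Data Appliance the Kerberos and directory wiring is normally done through the vendor tooling); all DNs, hostnames, and file paths are hypothetical:

    [dbmodules]
        ldap_backend = {
            db_library = kldap
            ldap_servers = ldaps://ldap.example.com
            ldap_kerberos_container_dn = "cn=krbContainer,dc=example,dc=com"
            ldap_kdc_dn = "cn=kdc-service,dc=example,dc=com"
            ldap_kadmind_dn = "cn=kadmin-service,dc=example,dc=com"
            ldap_service_password_file = /etc/krb5kdc/service.keyfile
        }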

 

QUESTION NO: 25

Your customer collects diagnostic data from its storage systems that are deployed at customer sites. The customer needs to capture and process this data by country in batches.

 

Why should the customer choose Hadoop to process this data?

 

A. Hadoop processes data on large clusters (10-50 max) on commodity hardware.

B. Hadoop is a batch data processing architecture.

C. Hadoop supports centralized computing of large data sets on large clusters.

D. Node failures can be dealt with by configuring failover with clusterware.

E. Hadoop processes data serially.

 

Answer: B




 

QUESTION NO: 26

Your customer wants to architect a system that helps to make real-time recommendations to users based on their past search history.

 

Which solution should the customer use?

 

A. Oracle Container Database

B. Oracle Exadata

C. Oracle NoSQL

D. Oracle Data Integrator

 

Answer: C
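
Real-time recommendations require millisecond key lookups against a user's profile, which is what Oracle NoSQL Database's key/value API provides; Oracle Data Integrator is a batch data integration tool, not a low-latency serving layer. A hedged Java sketch using the classic oracle.kv key/value API; the store name, helper host, and key structure are assumptions, not from the exam:

    // Hypothetical sketch: low-latency lookup of a user's past searches from
    // Oracle NoSQL Database. Store name, host/port, and key path are made up.
    import java.util.Arrays;
    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.Key;
    import oracle.kv.ValueVersion;

    public class SearchHistoryLookup {
        public static void main(String[] args) {
            // Connect to a (hypothetical) store named "kvstore" on node01:5000
            KVStore store = KVStoreFactory.getStore(
                    new KVStoreConfig("kvstore", "node01:5000"));

            // Major key path: /user/42/searchHistory
            Key key = Key.createKey(Arrays.asList("user", "42", "searchHistory"));

            // A single-key get is a millisecond-scale operation, which is what
            // makes NoSQL suitable for real-time recommendation lookups
            ValueVersion vv = store.get(key);
            if (vv != null) {
                System.out.println(new String(vv.getValue().getValue()));
            }
            store.close();
        }
    }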

 

QUESTION NO: 27

How should you control the Sqoop parallel imports if the data does not have a primary key?

A. by specifying no primary key with the --no-primary argument

B. by specifying the number of maps by using the -m option

C. by indicating the split size by using the --direct-split-size option

D. by choosing a different column that contains unique data with the --split-by argument

 

Answer: D
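
When there is no primary key, Sqoop cannot choose a split column on its own, so --split-by names one explicitly. A hedged sketch; the connection details, table, and column are hypothetical:

    # ORDERS has no primary key; split the parallel import on ORDER_ID,
    # a column with unique, evenly distributed values.
    sqoop import \
      --connect jdbc:oracle:thin:@//dbhost.example.com:1521/ORCL \
      --username SCOTT --password-file /user/scott/.pw \
      --table ORDERS \
      --split-by ORDER_ID \
      --target-dir /data/orders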

 

QUESTION NO: 28

Your customer uses Active Directory to manage user accounts. You are setting up Hadoop Security for the customer’s Big Data Appliance.

How will you integrate Hadoop and Active Directory?

A. Set up Kerberos’ Key Distribution Center to be the Active Directory keystore.

B. Configure Active Directory to use Kerberos’ Key Distribution Center.

C. Set up a one-way cross-realm trust from the Kerberos realm to the Active Directory realm.

D. Set up a one-way cross-realm trust from the Active Directory realm to the Kerberos realm.

 

Answer: C
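
Mechanically, such a trust is established by creating a cross-realm krbtgt principal with matching passwords and encryption types in both KDCs. A hedged sketch of the MIT-KDC side only; the realm names and password are made up, and a matching trust object must also be created on the Active Directory side:

    # On the cluster's MIT KDC: create the cross-realm ticket-granting
    # principal. The same password must be used when the trust is defined in AD.
    kadmin.local -q "addprinc -pw TrustPassw0rd krbtgt/CLUSTER.EXAMPLE.COM@AD.EXAMPLE.COM"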

 

QUESTION NO: 29

What is the main purpose of the Oracle Loader for Hadoop (OLH) Connector?

 

A. runs transformations expressed in XQuery by translating them into a series of MapReduce jobs that are executed in parallel on a Hadoop cluster

B. pre-partitions, sorts, and transforms data into an Oracle ready format on Hadoop and loads it into the Oracle database

C. accesses and analyzes data in place on HDFS by using external tables

D. performs scalable joins between Hadoop and Oracle Database data

E. provides a SQL-like interface to data that is stored in HDFS

F. is the single SQL point-of-entry to access all data

 

Answer: B
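
OLH runs as a MapReduce job whose driver class is oracle.hadoop.loader.OraLoader; the input format, target table, and load method come from an XML job configuration file. A hedged invocation sketch; the paths and configuration file name are assumptions:

    # Run Oracle Loader for Hadoop; partitioning, sorting, and formatting
    # happen in the MapReduce job before rows reach the database.
    hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
      -conf /home/oracle/olh_job_config.xml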

 

QUESTION NO: 30

Your customer has three XML files in HDFS with the following contents. Each XML file contains comments made by users on a specific day. Each comment can have zero or more “likes” from other users. The customer wants you to query this data and load it into the Oracle Database on Exadata.

How should you parse this data?

A. by creating a table in Hive and using MapReduce to parse the XML data by column

B. by configuring the Oracle SQL Connector for HDFS and parsing by using SerDe

C. by using the XML file module in the Oracle XQuery for Hadoop Connector

D. by using the built-in functions for reading JSON in the Oracle XQuery for Hadoop Connector

 

Answer: C
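
The XML file module ships with the Oracle XQuery for Hadoop connector as the xmlf adapter, which reads XML documents (or fragments split on a named element) directly from HDFS; the Oracle SQL Connector for HDFS does not parse XML, and SerDes belong to Hive. A hedged XQuery sketch of querying the comments; the file layout and the comment/like element names are guesses, since the XML exhibit is not reproduced here:

    (: Count the likes per comment across all XML files in a directory.
       Element and attribute names are assumptions. :)
    import module "oxh:xmlf";
    import module "oxh:text";

    for $c in xmlf:collection("comments/*.xml", "comment")
    return text:put(fn:concat($c/@id, ",", fn:count($c/like)))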

 
