BIGDATA Testing


A prerequisite for creating value from Big Data is having proper governance processes in place to ensure data quality. Comprehensive analysis and research of quality standards and quality assessment methods for big data are essential for the success of Big Data programs.

The integrity, validity, completeness, and quality of data are vital to an organization's growth and to attaining its overall business objectives. Big Data is increasingly popular today because it empowers enterprises and senior management to make informed decisions based on both historical and current data points. Testing these business-critical applications helps you avoid duplication and redundancy across data sources.

RightData™'s Hadoop-powered, scalable processing engine provides comprehensive capabilities for Big Data testing. Its scalable data testing engine lets the user easily create test scenarios between disparate systems, automate the whole testing process, and ensure that data ingested into the Big Data platform and transformed using MapReduce logic remains intact compared to its source.

Big data is a collection of large datasets that cannot be processed using traditional computing techniques. Big data relates to data creation, storage, retrieval and analysis that is remarkable in terms of volume, variety, and velocity. Big Data Testing can be broadly divided into three steps.

Data Staging Validation

The first step of Big Data testing, also referred to as the pre-Hadoop stage, involves process validation; a minimal validation sketch follows the list below.

  • Data from various sources such as RDBMS, weblogs, and social media should be validated to make sure the correct data is pulled into the system
  • Source data should be compared with the data pushed into the Hadoop system to make sure they match
  • The right data should be extracted and loaded into the correct HDFS location
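One minimal way to automate these staging checks is with PySpark, assuming a relational source reachable over JDBC (with the driver on the classpath) and a Parquet landing area in HDFS. The JDBC URL, credentials, table name, and HDFS path below are hypothetical placeholders, not part of the original material.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("staging-validation").getOrCreate()

# Source side: the RDBMS table the data was extracted from
# (hypothetical JDBC URL, credentials, and table name)
source_df = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://source-db:5432/sales")
             .option("dbtable", "public.orders")
             .option("user", "tester")
             .option("password", "secret")
             .load())

# Target side: the files landed in the expected HDFS location (hypothetical path)
landed_df = spark.read.parquet("hdfs:///data/raw/orders/")

# 1. Row counts must match: the correct data was pulled in and nothing was dropped
assert source_df.count() == landed_df.count(), "Row count mismatch between source and HDFS"

# 2. Row-level comparison: rows present on one side but not the other
missing_in_hdfs = source_df.subtract(landed_df.select(source_df.columns))
extra_in_hdfs = landed_df.select(source_df.columns).subtract(source_df)
assert missing_in_hdfs.count() == 0 and extra_in_hdfs.count() == 0, \
    "Source data and landed HDFS data do not match"
```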

Map Reduce Validation

The second step is MapReduce validation. In this stage, the tester verifies the business logic on every node and then validates it again after running against multiple nodes, ensuring that the following hold (a minimal local test sketch follows the list):

  • The MapReduce process works correctly
  • Data aggregation or segregation rules are implemented on the data
  • Key-value pairs are generated
  • The data is valid after the MapReduce process
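One way to exercise this logic before it reaches the cluster is to unit-test the mapper and reducer locally against a small, hand-checked input. The sketch below is a minimal, self-contained example; the "total sales per region" mapper and reducer are hypothetical stand-ins for the actual job logic under test.

```python
from itertools import groupby
from operator import itemgetter

def mapper(record):
    # Emit a (key, value) pair per input record: region -> sale amount
    yield (record["region"], record["amount"])

def reducer(key, values):
    # Aggregation rule under test: sum the amounts for each region
    return (key, sum(values))

def run_local(records):
    # Simulate the shuffle/sort phase, then reduce each key group
    pairs = sorted(kv for rec in records for kv in mapper(rec))
    return dict(reducer(key, [value for _, value in group])
                for key, group in groupby(pairs, key=itemgetter(0)))

# Small sample input and the result expected from the business rules
sample = [{"region": "EU", "amount": 10},
          {"region": "EU", "amount": 5},
          {"region": "US", "amount": 7}]
assert run_local(sample) == {"EU": 15, "US": 7}, "Aggregation rule broken"
```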

Output Validation

The third and final stage of Big Data testing is output validation. The output data files are generated and ready to be moved to an EDW (Enterprise Data Warehouse) or any other system, based on the requirement.

Activities in the third stage include the following; a minimal sketch of these checks follows the list.

  • Checking that the transformation rules are correctly applied
  • Checking data integrity and successful data load into the target system
  • Checking that there is no data corruption by comparing the target data with the HDFS file system data
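The PySpark sketch below illustrates these output checks, assuming the processed output sits in HDFS as Parquet and has been loaded into a warehouse table reachable over JDBC. The paths, connection details, table name, and column names (quantity, unit_price, total_amount) are hypothetical placeholders; the transformation rule shown is only an example of the kind of rule that would be verified.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("output-validation").getOrCreate()

# Output written by the processing job (hypothetical HDFS path)
hdfs_out = spark.read.parquet("hdfs:///data/processed/orders_summary/")

# The same data after loading into the target warehouse (hypothetical connection)
edw_df = (spark.read.format("jdbc")
          .option("url", "jdbc:oracle:thin:@edw-host:1521/DWH")
          .option("dbtable", "DWH.ORDERS_SUMMARY")
          .option("user", "tester")
          .option("password", "secret")
          .load())

# 1. Transformation rule check, e.g. total_amount must equal quantity * unit_price
bad_rows = hdfs_out.filter(F.col("total_amount") != F.col("quantity") * F.col("unit_price"))
assert bad_rows.count() == 0, "Transformation rule violated in the output data"

# 2. Integrity / no corruption: the target matches the HDFS output row for row
cols = hdfs_out.columns
assert hdfs_out.count() == edw_df.count(), "Row count mismatch between HDFS output and EDW"
assert hdfs_out.subtract(edw_df.select(cols)).count() == 0, \
    "Rows missing or altered in the EDW load"
```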