Product: DataMarket

Find, access, and take action on data in seconds

Generative AI-powered catalog for all your data products, no matter where they're located


If you can’t find and use your data,
what’s the point?

DataMarket is built to solve the problem of "where's that data?"
We help people find, access, and start using data as easily as they'd shop online.

DataMarket is the modern way to access and take action on data products

Build, manage, and govern data products that your users can easily find, access, and start using in minutes.

Imagine all your data products at your fingertips, easily searchable and actionable

Package Data Assets as Data Products
  • Connect to any data source and package datasets as Data Products
  • Track end-to-end lineage of Data Products
  • Trust Data Products with Data Quality Scores
  • Build a Semantic Layer
  • Detect PII / PHI / PCI in every Data Product
  • CI/CD approval process for managing the lifecycle of Data Products
Governance
  • Create & Enforce Policies (coming soon)
  • Column Masking (see the sketch after this list)
  • Row-Level Security
  • Integration with Active Directory Groups
  • Audit Reporting
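
As a rough illustration of how column masking and row-level security could behave at query time, here is a minimal Python sketch. The policy values, the "finance-analysts" Active Directory group name, and the helper functions are assumptions made for this example and are not DataMarket's actual policy engine or API.

    import hashlib

    # Hypothetical policy for the example: mask the 'ssn' column and let only
    # members of the 'finance-analysts' AD group see rows outside EMEA.
    MASKED_COLUMNS = {"ssn"}
    UNRESTRICTED_GROUP = "finance-analysts"

    def apply_policies(rows, user_groups):
        """Apply row-level security, then column masking, to query results."""
        visible = []
        for row in rows:
            # Row-level security: drop rows the user's groups may not see
            if UNRESTRICTED_GROUP not in user_groups and row.get("region") != "EMEA":
                continue
            # Column masking: replace sensitive values with a one-way hash
            visible.append({
                col: hashlib.sha256(str(val).encode()).hexdigest()[:8]
                if col in MASKED_COLUMNS else val
                for col, val in row.items()
            })
        return visible

    rows = [{"ssn": "123-45-6789", "region": "EMEA", "amount": 100},
            {"ssn": "987-65-4321", "region": "APAC", "amount": 250}]
    print(apply_policies(rows, user_groups=["sales-emea"]))  # APAC row dropped, ssn masked
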
Search & Discover Data Products
  • One-stop shop to search and discover active / passive metadata for Data Products
  • Explore data products in one place and subscribe to them directly
  • Shopping-cart experience for Data Products
Data Virtualization
  • Create live Data Products and cached Data Products
  • Federated query engine supporting a wide variety of data sources
  • True pushdown, leveraging the processing power of the source systems (see the sketch after this list)
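
To make pushdown concrete, the sketch below federates a filtered aggregate across two stand-in SQLite "sources", so only small aggregated results, not raw rows, travel back to the engine. The source names, table, and query are invented for the example; in practice the federated engine connects to warehouses, lakes, and operational databases.

    import sqlite3

    def make_source(rows):
        """Create an in-memory SQLite database acting as one remote source."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
        return conn

    sources = {
        "erp": make_source([("EMEA", 120.0), ("APAC", 75.0)]),
        "crm": make_source([("EMEA", 40.0), ("AMER", 200.0)]),
    }

    # Pushdown: each source filters and aggregates locally using its own
    # processing power; the engine only combines the partial results.
    pushdown_sql = ("SELECT region, SUM(amount) FROM orders "
                    "WHERE region = 'EMEA' GROUP BY region")

    total = sum(subtotal for conn in sources.values()
                for _, subtotal in conn.execute(pushdown_sql))
    print(f"EMEA total across sources: {total}")  # 160.0
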
Generative AI
  • Explore Data Products by asking natural-language questions
  • Automatically write SQL and Python code for data analysis
  • Built-in LLM to generate code for data analysis (illustrated in the sketch below)
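
The snippet below is a minimal sketch of natural-language-to-SQL generation. It uses the OpenAI Python SDK purely as a stand-in backend and a made-up orders schema; DataMarket's built-in LLM, prompts, and code-generation pipeline are not shown here.

    from openai import OpenAI  # stand-in LLM backend for this sketch only

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SCHEMA = "orders(order_id INT, region TEXT, amount REAL, order_date DATE)"

    def question_to_sql(question: str) -> str:
        """Translate a natural-language question into a SQL query."""
        prompt = (f"Given the table {SCHEMA}, write one SQL query that answers:\n"
                  f"{question}\nReturn only the SQL.")
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    print(question_to_sql("What was total revenue by region last quarter?"))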

Find Data

  • DataTrust offers a full set of applications for analyzing source and target datasets.
  • Our "Query Builder" component and Data Profiling features help stakeholders understand and analyze data before using the corresponding datasets in the various validation and reconciliation scenarios available (a small profiling sketch follows).
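
As an illustration of the kind of profile a stakeholder might review before building a validation or reconciliation scenario, here is a small pandas sketch; the DataFrame and column names are invented and this is not DataTrust's profiling implementation.

    import pandas as pd

    # Toy dataset standing in for a source or target table
    df = pd.DataFrame({
        "customer_id": [1, 2, 2, None],
        "country": ["US", "DE", "DE", None],
        "balance": [100.0, 250.5, 250.5, 80.0],
    })

    # Basic per-column profile: type, completeness, and cardinality
    profile = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "non_null": df.notna().sum(),
        "null_pct": (df.isna().mean() * 100).round(1),
        "distinct": df.nunique(),
    })
    print(f"rows: {len(df)}")
    print(profile)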

See Details of Data Products

  • Compare row counts between source and target dataset pairs and identify the tables whose row counts do not match (see the sketch below)
  • The row-count comparison algorithm allows row counts of multiple tables/views to be compared simultaneously
  • Best fit for database upgrade testing, big data ingest layer testing, data warehouse staging extract and load testing, and master data testing
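
A minimal sketch of the row-count comparison, using two in-memory SQLite databases as stand-ins for source and target; the table names and data are invented and this is not the product's algorithm.

    import sqlite3

    src = sqlite3.connect(":memory:")
    tgt = sqlite3.connect(":memory:")
    src.executescript("CREATE TABLE customers(id); "
                      "INSERT INTO customers VALUES (1),(2),(3);")
    tgt.executescript("CREATE TABLE customers(id); "
                      "INSERT INTO customers VALUES (1),(2);")

    def row_count(conn, table):
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    # Compare each source/target table pair and flag mismatched counts
    for src_table, tgt_table in [("customers", "customers")]:
        s, t = row_count(src, src_table), row_count(tgt, tgt_table)
        status = "OK" if s == t else "MISMATCH"
        print(f"{src_table} -> {tgt_table}: source={s} target={t} {status}")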

Govern & Gain Access to Data

  • Compares datasets between source and target and identifies the rows that do not match
  • The field-level data comparison algorithm allows data to be compared between multiple pairs of tables/views simultaneously (a simplified sketch follows this list)
  • Best fit for database upgrade testing, big data ingest layer testing, data warehouse testing for objects with minimal transformations, production parallel testing, and master data testing
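
A simplified pandas sketch of a field-level comparison on a shared key, flagging rows that are missing on one side or that differ in any field; the DataFrames and column names are assumptions for illustration only.

    import pandas as pd

    source = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
    target = pd.DataFrame({"id": [1, 2, 4], "amount": [10.0, 25.0, 40.0]})

    # Outer-join source and target on the key, keeping a merge indicator
    diff = source.merge(target, how="outer", on="id",
                        suffixes=("_src", "_tgt"), indicator=True)

    # Rows present on only one side, or whose fields differ, are mismatches
    mismatched = diff[(diff["_merge"] != "both") |
                      (diff["amount_src"] != diff["amount_tgt"])]
    print(mismatched)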

Connect to Data

  • Use Key Data Statistics Studio (KDS) to test whether the data from before an upgrade matches the data after the upgrade (technical data testing)
  • Perform bulk comparisons across multiple pairs of datasets to dramatically speed up testing for data integration, upgrades, and data staging loads
  • Use Record Count Compare to quickly identify row-count differences between one or more tables/queries, or Row Level Compare to compare data between one or more tables/queries across source and target
  • Create reconciliation scenarios that identify not only the records that do not match between source and target but also the exact set of fields contributing to the mismatch (sketched below)
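
To illustrate a reconciliation scenario, here is a tiny pure-Python sketch that reports, for each key, not just that source and target differ but exactly which fields contribute to the mismatch; the records are invented and the product's reconciliation engine is not shown.

    source = {
        101: {"status": "shipped", "amount": 99.0},
        102: {"status": "open", "amount": 10.0},
    }
    target = {
        101: {"status": "shipped", "amount": 95.0},
        103: {"status": "open", "amount": 10.0},
    }

    for key in sorted(set(source) | set(target)):
        if key not in source or key not in target:
            print(f"{key}: present on one side only")
            continue
        # Collect the exact fields whose values differ for this key
        bad_fields = [f for f in source[key] if source[key][f] != target[key].get(f)]
        if bad_fields:
            print(f"{key}: mismatch in {bad_fields}")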

Converse with Data

  • Rule-based data validation engine with an easy-to-use interface for creating validation scenarios
  • Define multiple validation rules against a target dataset and capture exceptions
  • Analyze and report on validations


  • Create validation rules against a dataset, execute the rules, and identify the records that violate them
  • Select a data source to be validated and define one or more validation rules; the scenario ingests the dataset, executes the defined rules, and returns exceptions (a minimal sketch follows)
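
A minimal sketch of the rule-then-exceptions flow, assuming validation rules are simple predicates evaluated against each record; the rule names, dataset, and exception format are invented for the example.

    # Each rule maps a name to a predicate that must hold for every record
    rules = {
        "amount_positive": lambda r: r["amount"] > 0,
        "country_present": lambda r: bool(r.get("country")),
    }

    dataset = [
        {"id": 1, "amount": 120.0, "country": "US"},
        {"id": 2, "amount": -5.0, "country": "DE"},
        {"id": 3, "amount": 40.0, "country": ""},
    ]

    # Execute every rule against every record and capture the exceptions
    exceptions = [{"record_id": rec["id"], "rule": name}
                  for rec in dataset
                  for name, check in rules.items()
                  if not check(rec)]
    print(exceptions)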

The team was so excited that we were able to do it in a fraction of the time and so effectively.

$940K saved annually by automating data quality across nine data sources.

14 FTEs saved through automation. 60% reduction in time needed to test data.