This presentation to the MarkLogic User Group in London will focus on what the DPM is and how to exploit it within a semantic technology enabled environment.
Next Generation Data Models – How Data Point Modeling Really Works – Greg Soulsby, ModelDR
• How to develop an enterprise data model without complex extract, transform and load (ETL). You will learn how to eliminate the need for a data warehouse. You will see us use the OpenGamma trading platform as an example of a complex source system.
• Why an enterprise-wide data model built on industry standards like FIBO and Data Point Modeling creates an agile enterprise architecture, using MiFID II trade lifecycle reporting as the use case.
• How to run real-time queries across your entire systems estate. We will use the Large Trade report as the example.
• How to support change, such as new reports, new viewpoints for new actors and changing data schemas.
• How to manage data quality by exploiting the Data Point Model approach to concerns such as bitemporality, governance, timeliness, completeness and accuracy.
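The estate-wide query idea in the bullets above can be sketched in a minimal way. This is an illustrative assumption, not ModelDR's or MarkLogic's actual implementation: two hypothetical silo systems expose trades under different local field names, a Data Point Model-style mapping binds those names to shared data points, and a single query (such as a Large Trade report) then runs across both silos with no ETL into a warehouse.

```python
# Hypothetical sketch: all system names, field names and thresholds
# below are invented for illustration only.

# Silo A: a front-office trading system with its own local terms.
silo_a = [
    {"tradeId": "A-1", "ntnl": 15_000_000, "ccy": "EUR"},
    {"tradeId": "A-2", "ntnl": 250_000, "ccy": "GBP"},
]

# Silo B: a back-office settlement system using different terms
# for the same concepts.
silo_b = [
    {"ref": "B-9", "notional_amount": 40_000_000, "currency": "USD"},
]

# Data Point Model-style mapping: each silo's local field names are
# bound to common, shared data points.
MAPPINGS = {
    "silo_a": {"trade_id": "tradeId", "notional": "ntnl", "currency": "ccy"},
    "silo_b": {"trade_id": "ref", "notional": "notional_amount",
               "currency": "currency"},
}

def as_data_points(system, records):
    """Re-express one silo's records in the shared data-point vocabulary."""
    mapping = MAPPINGS[system]
    for record in records:
        yield {point: record[field] for point, field in mapping.items()}

def large_trades(threshold):
    """One query across the whole estate, e.g. for a Large Trade report."""
    estate = list(as_data_points("silo_a", silo_a)) + \
             list(as_data_points("silo_b", silo_b))
    return [t["trade_id"] for t in estate if t["notional"] >= threshold]

print(large_trades(10_000_000))  # -> ['A-1', 'B-9']
```

The point of the sketch is that once local terms are wired to common data points, the query is written once against the shared vocabulary rather than once per source system.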
A Turning Point for your Data: how to think about your data in a NoSQL world
Thursday, Oct 22, 2015, 5:00 PM
Etc Venues, Norton Folgate – Liverpool Street,
Bishopsgate Court London, E1 6DQ, GB
45 Members Attending
MarkLogic’s schema-agnostic design lets you load data as-is and use it right away, without having to design a master schema that can accommodate all your data in advance. And you can get great value from MarkLogic without ever doing any data modeling at all. But if you want to get the most out of your data, it helps to understand what’s in it. Much…
Pulling together accurate reports that meet regulators’ changing demands is a clarion call for bodies and resources to “unwind data” and aggregate its metadata. Failure to do so could mean penalties, fines and loss of trust.
But more resources and endless ETL can actually create more data uncertainties and inaccuracies. In this 60-minute webinar, ModelDR’s Greg Soulsby and Simon Roberts join MarkLogic colleagues Chris Atkinson, Rupert Brown and Diane Burley to discuss a new and sustainable data aggregation process.
Designed for data architects and managers, the webinar will equip you for further discussion with interest groups in your organization. You will learn key terms like:
- data point modeling
- congruent panoply
We hope you can join us for this show-don’t-tell event.
Model-driven solutions to regulation BCBS 239
Wednesday 29th April 2015 (18:00 – 20:00)
The Club Room at the Old Bank of England, 194 Fleet Street, London EC4A 2LT
You will learn how to:
– Demonstrate compliance with each of the principles by re-purposing your information architecture
– Meet the obligations more efficiently by leveraging FIBO and semantic technology
– Show your management team how data architecture helps meet BCBS 239 using Greg’s “BCBS 239 model-driven solutions checklist” tool.
Presentation by Greg Soulsby.
Dodd Frank Record Keeping Rules – a lot of data for banks to untangle!
Bank systems have thousands of data attributes, delivered by hundreds of internal and external sources, all stored in dozens of unconnected databases. This fragmentation results in a continual challenge of mapping, cross-referencing and manual reconciliation, further exacerbated by the problem of common terms that have different meanings, common meanings that use different terms and vague definitions that are not captured in recognised financial taxonomies.
The Dodd Frank record keeping rules are directed towards full ‘end-to-end’ transaction monitoring and require an unambiguous data management strategy with full data ownership and accountability.
Dodd Frank record keeping requirements can only be met by a data architecture that untangles firms’ silo-based data systems and harmonizes the data aggregation process. Systems must be ‘wired up’ to a data architecture, designed in Data Point Model format and updated with the latest regulatory taxonomies.
The Data Point Model ‘wiring up’ process is depicted in a static example form below.
ModelDR untangles the data, imports the regulatory taxonomies and builds a new design architecture. It creates a clear and complete view of ‘in house’ data and allows data quality testing consistent with current regulatory standards.
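The data quality testing mentioned above can be illustrated with a small sketch. The required data points and the 24-hour timeliness window below are hypothetical assumptions for illustration, not actual ModelDR features or regulatory thresholds; the idea is simply that once records are expressed as data points, completeness and timeliness checks become mechanical.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data-quality checks over harmonized data-point records.
# REQUIRED_POINTS and the 24-hour window are illustrative assumptions.

REQUIRED_POINTS = {"trade_id", "notional", "currency", "reported_at"}

def completeness_issues(record):
    """Return the set of required data points missing from a record."""
    return REQUIRED_POINTS - record.keys()

def is_timely(record, now, window=timedelta(hours=24)):
    """Check that the record was reported within the allowed window."""
    return now - record["reported_at"] <= window

now = datetime(2015, 10, 22, 17, 0, tzinfo=timezone.utc)
record = {
    "trade_id": "A-1",
    "notional": 15_000_000,
    "currency": "EUR",
    "reported_at": now - timedelta(hours=3),
}

print(completeness_issues(record))  # -> set() (nothing missing)
print(is_timely(record, now))       # -> True (reported 3 hours ago)
print(completeness_issues({"trade_id": "A-2"}))  # names the missing points
```

Checks like these can be run against every silo through the same mapping layer, which is what makes the quality reporting consistent across the estate.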
Dodd Frank Data Point Model