Big Data Governance

However large and complex Big Data ultimately becomes in terms of data volume, velocity, variety, and variability, Big Data Governance will in some important conceptual and practical dimensions be larger still. Data Governance will need to persist across the data lifecycle: at rest, in motion, and in incomplete stages and transactions, all the while serving the privacy and security of the young and the old, of individuals and companies alike, so that it can be an emergent force for good. It will need to ensure economy and innovation, and to enable freedom of action and individual and public welfare. It will need to rely on standards governing things we do not yet know, while integrating the human element of our humanity with strange new interoperability capabilities. Data Governance will require new kinds and possibilities of perception, yet accept that our current techniques are notoriously slow. For example, even today we have not yet scoped in all data types.

The reason so many of us are gathering our energies and the multiplicity of our perspectives is that we know Big Data without Big Data Governance will be less likely to be a force for good. It may come to be said that the best use of Big Data is Big Data Governance.

What concept or concepts are powerful enough to organize, cohere, and form an actionable way forward? Are we brave enough to push a few concepts forward for discussion? Some think data provenance, curation, and conformance are the way forward. I agree with those who think this ground deserves a fifth V: Value.

What kind of Privacy is the IDESG creating?

Image by Ann Racuya-Robbins Copyright 2012
Social Cooperation and Privacy

Ryan, respectfully, I don’t agree with your plan forward. To further this discussion I have opened slots on the IDESG taxonomy wiki where we can capture this conversation and, I hope, come to agreement on what we are trying to protect in the Identity Ecosystem Framework. I suggest human attributes, but whatever is decided, we need to build a reference model definition to guide others. As I have already mentioned, human attributes and personal information belong to the same domain, namely human capability. It is human capability that we are trying to protect and encourage. While Bob Pinheiro’s definition has some merit, it is too narrow to protect dynamic human capability and is already out of date as to human attributes, not to mention the Big Data challenges. This is one reason among many why we need to claim and acknowledge the dynamic nature of human capabilities and align the human attributes these capabilities create with something more encompassing and universal, such as human rights.

Ryan, you said: “The IDESG is attempting to create an identity ecosystem framework intended to govern those identity service providers and RPs that voluntarily choose to adopt the rules, requirements, and standards embodied in that framework.” More than that, we are working on creating and providing a certification program, a “Trustmark,” that will enhance the standing of an Identity Provider or Service Provider with potential customers. The IDESG is giving, or giving away, something valuable. Distrust in cyberspace and online is growing. If the IDESG creates a “Trustmark” plan that misses the mark by a mile, trust will be even harder to re-establish.

You also said: “We are not creating legislation and we are not going to regulate the entire internet and mobile world. So—at least to start—I suggest we begin by answering this question within the context of what we have begun to lay out as our target transaction; namely one which is authenticated and involves “personal information” (as suggested above). What security requirements should we seek to put into place to protect metadata in this instance?” But the IDESG is creating a contract for compliance. The self-attestation and assessment have requirements.
This is exactly the right place to identify and make clear what we are trying to protect. Metadata cannot be left to security alone but must have privacy protections. What is going to happen to the data in providers’ audit and security logs? Metadata is personal information, or human attributes, and must be protected by privacy wherever it exists. We have not reached agreement that our target transaction is authentication. We must begin with Registration, because that is the touch point with human capabilities, human attributes, and personal information. User-Managed Access (UMA), while constructive, is not enough of an answer. We can’t begin with access. If end users are able to evaluate providers based on an informed valuation of what their human attributes are worth, they will choose the partners that offer them a good value proposition. Any “Trustmark” that obscures that value will not be trusted.

Regards,
Ann Racuya-Robbins

The Human Trust Experience in an Era of Big Data

Consumer, Manager, Domain Expert Proposal
Subtopic: Unmet Big Data requirements

Ann Racuya-Robbins Image
tHTRX Logo graphic

1. Title
The Human Trust Experience (HTX) in an Era of Big Data

2. Point of Contact (Name, affiliation, email address, phone)
Ann Racuya-Robbins
World Knowledge Bank: Human Trust Experience Initiative

3. Working Group URL
https://www.humantrustexperience.net

4. Proposed panel topic: Unmet Big Data requirements

5. Abstract
The Human Trust Experience Initiative’s mission is to use Big Data to explore and lay the groundwork for understanding the parameters, characteristics, attributes, information architecture, and reference and interaction models of the human trust experience, in motion and at rest. The central premises of this work, to be evaluated and interpreted, are that:
• The human trust experience is foundational to Privacy, to the uptake of ICT innovation, to education, and to the challenges of democratic governance.
• The human trust experience is a central component of all human labor and of individual and community well-being and survival.
• The human trust experience can be a measure and standard by which we understand and prioritize problem solving.

6. Working Group summary
• Create the human trust experience use case.
• Create the human trust experience context.
• Create a semiotics and information architecture of the human trust experience.
• Facilitate, through a CMS, conversation about the HTX in a Big Data context.

7. Number of Participants, date working group began, frequency of meetings
December 2013

8. Target Audience
Individuals, Consumers and Producers of Big Data, Businesses, Government

9. Current initiatives
The Human Trust Experience Initiative

10. Specific Big Data Challenges:
Value, Valuation, Contextual Veracity, Identity, Pseudonymity, Anonymity, Privacy, Vetting, Contextual Vetting

11. Urgent research needs

12. Related Projects or Artifacts
The Human Trust Experience: Informed Valuation Project

13. Big Data metrics (describe your data to make a Big impression)
Search, discovery, revelation, creation and analysis of the human trust experience from cyberspace data.

14. Keywords
human trust experience, value, valuation, informed valuation, informed contextual value, informed contextual valuation, contextual veracity, identity, pseudonymity, anonymity, privacy, risk management

Human Trust Experience Meets Big Data

Developing Standards for a Human Trust Experience in a Time of Big Data

Over the last two months I have been participating in the Big Data Technology Roadmap effort through the NIST Public Working Group for Big Data. I think one of the needs here is the development of “Standards for a Human Trust Experience in a Time of Big Data.” I have requested to submit such a paper for a discussion group at the upcoming meeting in Washington, DC.