Implicit contextual integrity in online social networks

Criado, Natalia and Such, Jose M. (2015) Implicit contextual integrity in online social networks. Information Sciences, 325. pp. 48-69. ISSN 0020-0255

PDF (INS2015-InformationAssistantAgents)
INS2015_InformationAssistantAgents.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial-NoDerivs.


Abstract

Many real incidents demonstrate that users of Online Social Networks need mechanisms that help them manage their interactions by increasing their awareness of the different contexts that coexist in Online Social Networks, and by preventing them from exchanging inappropriate information in those contexts or disseminating sensitive information from some contexts to others. Contextual Integrity is a privacy theory that conceptualises the appropriateness of information sharing based on the contexts in which this information is to be shared. Computational models of Contextual Integrity assume the existence of well-defined contexts, in which individuals enact pre-defined roles and information sharing is governed by an explicit set of norms. However, contexts in Online Social Networks are known to be implicit, unknown a priori and ever-changing; users' relationships are constantly evolving; and the information sharing norms are implicit. This makes current Contextual Integrity models unsuitable for Online Social Networks. In this paper, we propose the first computational model of Implicit Contextual Integrity, presenting an information model for Implicit Contextual Integrity as well as a so-called Information Assistant Agent that uses the information model to learn implicit contexts, relationships and information sharing norms, in order to help users avoid inappropriate information exchanges and undesired information disseminations. Through an experimental evaluation, we validate the properties of the proposed model.
In particular, Information Assistant Agents are shown to: (i) infer the information sharing norms even if only a small proportion of users follow the norms and in the presence of malicious users; (ii) help reduce the exchange of inappropriate information and the dissemination of sensitive information with only a partial view of the system and of the information received and sent by their users; and (iii) minimise the burden on users by raising few unnecessary alerts.

Item Type:
Journal Article
Journal or Publication Title:
Information Sciences
Additional Information:
12 month embargo. This is the author's version of a work that was accepted for publication in Information Sciences. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Sciences, 325, 2015. DOI: 10.1016/j.ins.2015.07.013
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/1700/1702
Subjects:
contextual integrity; online social networks; norms; agents; privacy; artificial intelligence; theoretical computer science; software; information systems and management; control and systems engineering; computer science applications
ID Code:
74526
Deposited By:
Deposited On:
07 Jul 2015 11:00
Refereed?:
Yes
Published?:
Published
Last Modified:
01 Oct 2024 00:11