Publications

Full List


  • Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning
    In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is independent of the number of training steps; (2) noise is injected adaptively into features based on each feature's contribution to the output; and (3) it can be applied to a variety of deep neural networks.
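    As a rough illustration of the adaptive noise idea above, the sketch below splits a total privacy budget across features in proportion to an assumed relevance score and injects correspondingly scaled Laplace noise; the function names, the relevance measure, and the one-shot perturbation are illustrative assumptions, not the paper's exact mechanism.

    ```python
    import numpy as np

    def adaptive_laplace_noise(features, relevances, epsilon, sensitivity=1.0):
        """Split the budget epsilon across features proportionally to their
        (assumed) relevance, then add Laplace noise scaled by sensitivity over
        each feature's budget share: more relevant features get a larger share
        and therefore less noise."""
        rel = np.abs(relevances)
        shares = rel / rel.sum()                # per-feature budget split, sums to 1
        eps_per_feature = epsilon * shares      # per-feature budgets sum to epsilon
        scale = sensitivity / np.maximum(eps_per_feature, 1e-12)
        return features + np.random.laplace(0.0, scale, size=features.shape)

    # Usage: perturb the feature representation once, up front, so the privacy
    # cost does not accumulate with the number of training steps.
    x = np.random.rand(4, 8)                    # 4 examples, 8 features
    r = np.random.rand(8)                       # stand-in relevance scores
    x_private = adaptive_laplace_noise(x, r, epsilon=1.0)
    ```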

  • Importance Sketching of Influence Dynamics in Billion-scale Networks
    The growing availability of traces for social, biological, and communication networks opens up unprecedented opportunities for analyzing diffusion processes in networks. However, the sheer sizes of today's networks raise serious challenges in computational efficiency and scalability. In this paper, we propose a new hyper-graph sketching framework for influence dynamics in networks. At the core of our sketching framework, called SKIS, is an efficient importance sampling algorithm that returns only non-singular reverse cascades in the network.
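    As a loose illustration of the sampling idea, the sketch below generates reverse cascades under the independent cascade model and keeps only the non-singular ones (those reaching at least one node beyond the source); the data structures and names are assumptions for illustration and omit the importance weights and efficiency machinery of the actual SKIS algorithm.

    ```python
    import random

    def reverse_cascade(in_neighbors, edge_prob, source):
        """Reverse cascade under the independent cascade model: starting from a
        randomly chosen source, traverse incoming edges, keeping each edge
        (u, v) independently with probability edge_prob[(u, v)]."""
        visited = {source}
        frontier = [source]
        while frontier:
            node = frontier.pop()
            for pred in in_neighbors.get(node, []):
                if pred not in visited and random.random() < edge_prob[(pred, node)]:
                    visited.add(pred)
                    frontier.append(pred)
        return visited

    def sample_nonsingular_cascades(in_neighbors, edge_prob, nodes, num_samples):
        """Keep only non-singular cascades, i.e., cascades containing more than
        just the source; singular cascades carry no influence signal."""
        sketches = []
        while len(sketches) < num_samples:
            cascade = reverse_cascade(in_neighbors, edge_prob, random.choice(nodes))
            if len(cascade) > 1:
                sketches.append(cascade)
        return sketches

    # Tiny example: directed edges a->b, a->c, b->c stored as in-neighbor lists.
    in_neighbors = {"b": ["a"], "c": ["a", "b"]}
    edge_prob = {("a", "b"): 0.5, ("a", "c"): 0.3, ("b", "c"): 0.4}
    sketches = sample_nonsingular_cascades(in_neighbors, edge_prob, ["a", "b", "c"], 10)
    ```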

  • Preserving Differential Privacy in Convolutional Deep Belief Networks (Machine Learning Journal, S.I. ECML-PKDD 2017)
    The remarkable development of deep learning in the medicine and healthcare domains raises obvious privacy issues when deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which essentially is a convolutional deep belief network (CDBN) under differential privacy. Our main idea for enforcing $\epsilon$-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results.
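    The functional-mechanism idea can be sketched in a few lines: instead of perturbing the trained model or its outputs, Laplace noise is added to the coefficients of a polynomial approximation of the objective function, which is then optimized as usual. The coefficient vector, sensitivity value, and function names below are illustrative assumptions, not the paper's derivation for CDBN energy functions.

    ```python
    import numpy as np

    def perturb_objective_coefficients(coeffs, epsilon, sensitivity):
        """Functional-mechanism-style perturbation: add Laplace noise, scaled by
        sensitivity / epsilon, to the coefficients of a polynomial approximation
        of the objective, so the objective itself is privatized rather than the
        model's results."""
        scale = sensitivity / epsilon
        return coeffs + np.random.laplace(0.0, scale, size=coeffs.shape)

    # Usage: perturb the coefficients once, then minimize the perturbed objective
    # with any optimizer; later training adds no extra privacy cost.
    coeffs = np.array([0.5, -1.2, 0.8])   # stand-in low-order approximation terms
    private_coeffs = perturb_objective_coefficients(coeffs, epsilon=1.0, sensitivity=2.0)
    ```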

  • Ontology-based deep learning for human behavior prediction with explanations in health social networks
    Human behavior modeling is a key component in application domains such as healthcare and social behavior research. In addition to accurate prediction, having the capacity to understand the roles of human behavior determinants and to provide explanations for the predicted behaviors is also important. Having this capacity increases trust in the systems and the likelihood that the systems will actually be adopted, thus driving engagement and loyalty. However, most prediction models do not provide explanations for the behaviors they predict.

  • Enabling Real-Time Drug Abuse Detection in Tweets
    Prescription drug abuse is one of the fastest growing public health problems in the USA. To address this epidemic, a near real-time monitoring strategy, instead of one relying on retrospective health records, may improve detection of the prevalence and patterns of abuse of both illegal drugs and prescription medications. In this paper, our primary goal is to demonstrate the feasibility of using social media, e.g., Twitter, for automatic monitoring of illegal drug and prescription medication abuse. We use machine learning methods for automatic classification to identify tweets that are indicative of drug abuse. We collected tweets associated with well-known illegal and prescription drugs and manually annotated 300 tweets that are likely to be related to drug abuse.
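    The classification step can be pictured with a generic text-classification pipeline like the one below (TF-IDF n-gram features feeding a linear SVM); the paper's actual features, classifier, and annotated data are not reproduced here, and the toy tweets and labels are purely illustrative.

    ```python
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    # Toy stand-ins for the manually annotated tweets (1 = indicative of abuse).
    tweets = [
        "ran out of my script, buying oxy off a friend tonight",
        "picked up my prescription at the pharmacy, feeling better already",
    ]
    labels = [1, 0]

    # A standard pipeline: word/bigram TF-IDF features into a linear SVM.
    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
        ("svm", LinearSVC()),
    ])
    clf.fit(tweets, labels)
    print(clf.predict(["anyone selling adderall before finals?"]))
    ```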

  • Xiang Ji, Soon Ae Chun, Paolo Cappellari, James Geller: "A Framework for Linking Data Sources and Providing Intelligence in Social Health Analytics." Journal of Information Science, submitted for publication, May 2015, 22 pp.

  • Xiang Ji, Paolo Cappellari, Soon Ae Chun, James Geller (2015): "Leveraging Social Data for Health Care Behavior Analytics." International Conference on Web Engineering, pp. 667-670. Rotterdam, Netherlands.