“Distributional Representation of Complex Semantics” by Kuansan Wang

Although heterogeneous information networks (HINs) have been widely used to model complex systems, in many real-life applications HINs are insufficient to represent all the important activities taking place in these systems. This is often because many semantically salient interactions among the systems' constituents are unstructured in nature and too awkward to capture and encode as discrete nodes and links. Using scholarly communications as an example, where there is abundant natural language content in addition to the entities and relations that a HIN can model, this talk describes an audacious attempt to embed both the network topology and the unstructured content into a continuous space in a jointly optimized and, most importantly, analytically tractable manner. We show that many natural language embedding and network embedding techniques are in fact approximations of a more general framework that can be elegantly derived by extending the distributional hypothesis proposed in the 1950s, and that can be implemented and computed with the backpropagation algorithm in a highly parallelizable and scalable fashion. Some early results deployed in Microsoft Academic will also be discussed.

Kuansan Wang is Managing Director and a Principal Researcher at Microsoft Research Outreach in Redmond, WA. He joined Microsoft Research in 1998, first as a researcher in the Speech Technology Group working on multimodal dialog systems, then as an architect who designed and shipped various speech products, including Voice Command on mobile, which eventually became Cortana, and Microsoft Speech Server, which still powers Microsoft's and partners' call centers. In 2007, he rejoined Microsoft Research to work on large-scale natural language understanding and web search technologies, and he is currently responsible for running the largest machine reading effort that uses intelligent agents to dynamically acquire knowledge from the web and make it available to the general public. Kuansan received his BS from National Taiwan University and his MS and PhD from the University of Maryland, College Park, all in Electrical Engineering. In addition to the 120+ scholarly papers he has published and 40+ patents he holds, his work has been adopted into 10 international standards from W3C, Ecma and ISO.

“Learning New Type Representations from Knowledge Graphs” by Soumen Chakrabarti

Beyond words, continuous representations of entities and relations have led to large recent improvements in inferring facts in knowledge bases, as well as in applications like question answering. Comparatively less has been done on modeling types and their associated relations (is-instance-of and is-subtype-of). In the first part of the talk, I will present a new representation of types as hyper-rectangles rather than the points commonly used to embed words and entities. I will propose an elementary loss function representing rectangle containment. I will also demonstrate that recent work on type representation has used a questionable evaluation protocol, and propose a sound alternative. Experiments using type supervision from the WordNet noun hierarchy show the superiority of our approach. In the second part of the talk, I will move to unsupervised discovery of type representations. The idea is to represent each entity using a type vector and a residual vector. Each relation is represented by two type-checking vectors and an entity-to-entity compatibility-checking vector. We do not use any supervision from the KG schema to guide the type (checking) embeddings. Experiments on FB15k and YAGO show two benefits. First, inferring new triples becomes more accurate, exceeding the state of the art. Second, the type embeddings are very good predictors of the KG types to which the entities belong, although this information was not available during training.
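The hyper-rectangle representation in the first part admits a simple sketch: each type is an axis-aligned box, a per-dimension lower and upper bound, and an elementary containment loss penalizes every dimension where a subtype's box pokes outside its supertype's box. The boxes and the exact loss below are illustrative assumptions, not necessarily the formulation used in the talk.

```python
import numpy as np

def containment_loss(child, parent):
    """Zero iff the child box lies entirely inside the parent box.

    child, parent: (lo, hi) pairs of equal-length numpy arrays giving
    the per-dimension lower and upper bounds of a hyper-rectangle.
    """
    c_lo, c_hi = child
    p_lo, p_hi = parent
    # Penalize each dimension where the child overshoots the parent,
    # on either the lower or the upper side.
    return float(np.sum(np.maximum(p_lo - c_lo, 0.0) +
                        np.maximum(c_hi - p_hi, 0.0)))

# Hypothetical 2-D type boxes: "dog" should be contained in "animal".
animal = (np.array([0.0, 0.0]), np.array([4.0, 4.0]))
dog = (np.array([1.0, 1.0]), np.array([2.0, 2.0]))
cat = (np.array([3.0, 3.0]), np.array([5.0, 5.0]))  # overshoots by 1 per axis

print(containment_loss(dog, animal))  # 0.0: dog inside animal
print(containment_loss(cat, animal))  # 2.0: total overshoot across dims
```

Because the loss is zero exactly when containment holds and grows with the violation, minimizing it over is-subtype-of pairs drives subtype boxes inside supertype boxes, which is what lets rectangles, unlike points, encode the asymmetry of the subtype relation.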

Soumen Chakrabarti is a Professor of Computer Science at IIT Bombay. He received his PhD from the University of California, Berkeley, and worked on the Clever Web search and Focused Crawling projects at the IBM Almaden Research Center. He has also worked at Carnegie Mellon University and Google. He works on linking unstructured text to knowledge bases and exploiting these links for better search and ranking. His other interests include link formation and influence propagation in social networks, and personalized proximity search in graphs. He has published extensively in WWW, SIGKDD, EMNLP, VLDB, SIGIR, ICDE and other conferences. His work on keyword search in databases received the 10-year influential paper award at ICDE 2012. He is also the author of one of the earliest books on Web search and mining.

“Project Alexandria – In Pursuit of Commonsense AI” by Scott Yih

Enabling machines with commonsense, the knowledge that virtually every person has, is an important quest toward artificial general intelligence. In this talk, I will introduce Project Alexandria, the new initiative on commonsense AI at the Allen Institute for Artificial Intelligence (AI2). I will first briefly describe the vision of this project and review some past research efforts on commonsense knowledge representation and reasoning, explaining why it is a difficult problem. To encourage the community to make progress on commonsense AI, our focus in the first year of Project Alexandria is to create a large-scale benchmark dataset. I will talk about our latest work on producing natural commonsense questions by pairing crowd workers to play games, and share some of the lessons we have learned.

Scott Wen-tau Yih is a Principal Research Scientist at the Allen Institute for Artificial Intelligence (AI2). His research interests include natural language processing, machine learning and information retrieval. Yih received his Ph.D. in computer science from the University of Illinois at Urbana-Champaign. His work on joint inference using integer linear programming (ILP) has been widely adopted in the NLP community for numerous structured prediction problems. Prior to joining AI2, Yih spent 12 years at Microsoft Research, working on a variety of projects including email spam filtering, keyword extraction, and search and ad relevance. His recent work focuses on continuous representations and neural network models, with applications in knowledge base embedding, semantic parsing and question answering. Yih received the best paper award at CoNLL-2011 and an outstanding paper award at ACL-2015, and has served as area co-chair (HLT-NAACL-12, ACL-14, EMNLP-16,17,18), program co-chair (CEAS-09, CoNLL-14) and action/associate editor (TACL, JAIR) in recent years. He has also co-presented several tutorials on topics including Semantic Role Labeling (NAACL-HLT-06, AAAI-07), Deep Learning for NLP (SLT-14, NAACL-HLT-15, IJCAI-16), and NLP for Precision Medicine (ACL-17, AAAI-18).

“Open Knowledge Network” by Chaitan Baru

Knowledge networks that encode information and knowledge about real-world entities and their relationships provide a key enabling semantic information infrastructure for next-generation, artificial intelligence-based technologies and applications. An Open Knowledge Network (OKN) effort would help create a common semantic information infrastructure to boost the next generation of data-enabled machine learning and artificial intelligence applications. Such an open network could be formed by utilizing the data and information from an initial set of science and engineering domains, as well as other domains of interest, driven by a set of significant, well-defined questions addressing scientific challenges and societal problems. This talk will present results from a community workshop on this topic held in October 2017 at the National Library of Medicine. Application domains discussed at the workshop included geosciences, biomedicine, finance, and smart manufacturing. Workshop participants came from industry, academia and government agencies.

Chaitan Baru is Senior Advisor for Data Science in the Computer and Information Science & Engineering Directorate at the National Science Foundation, Alexandria, VA, where he co-chairs the NSF Harnessing the Data Revolution Big Idea working group and has responsibility for the cross-Foundation BIGDATA research program. He is an advisor to the NSF Big Data Regional Innovation Hubs and Spokes program (BD Hubs/Spokes) and was engaged in the development of the NSF Transdisciplinary Research in Principles of Data Science (TRIPODS) program. He also co-chairs the Big Data Interagency Working Group—which is part of the Networking and IT R&D program of the National Coordination Office, White House Office of Science and Technology Policy—and is a primary co-author of the Federal Big Data R&D Strategic Plan (released May 2016). Dr. Baru is on assignment at NSF from the San Diego Supercomputer Center (SDSC), University of California San Diego, where he is a Distinguished Scientist and Director of the Advanced Cyberinfrastructure Development Group and the Center for Large-scale Data Systems Research.