Plenary Lectures

In alphabetical order (by last name)

Prof. Fujita

Seeing the world in 3D with two eyes
and two cortical pathways
 

Prof. Ichiro Fujita

Graduate School of Frontier Biosciences
Osaka University, Japan

Abstract: 
  Close one eye and look around you. Then open that eye. You immediately notice that the visual world has vivid depth when viewed with both eyes. The human brain can derive the three-dimensional depth structure of objects and surfaces from the two flat images on the two retinae. This magnificent, mysterious, yet everyday ability is called stereopsis, and it has long attracted both scientists and non-scientists.
  Stereopsis allows a far more quantitative and sensitive detection of depth than depth perception based on monocular cues (e.g., occlusion, shading, and texture gradients). Because they are horizontally displaced from each other, our two eyes view the world from slightly different vantage points. This produces a tiny positional difference between the images formed on the two retinae. The brain exploits this small difference, or binocular disparity, to compute depth.
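
  As a simple geometric illustration (a textbook pinhole approximation, not part of the abstract), let B denote the interocular baseline, f the focal distance, Z the distance to a point, and d the resulting disparity; these symbols are assumptions introduced here for clarity:

```latex
% Minimal pinhole-geometry sketch; B, f, Z, d, and \Delta Z are assumed
% symbols, not taken from the abstract.
d \approx \frac{f B}{Z}
\qquad\Longleftrightarrow\qquad
Z \approx \frac{f B}{d},
\qquad
\delta \approx \frac{B\,\Delta Z}{Z^{2}}
```

  In this approximation the angular disparity of a depth interval ΔZ falls off with the square of the viewing distance, which is why stereoscopic depth is most precise for nearby objects.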

  The neural processing for binocular depth perception starts in the primary visual cortex (V1), where signals from the two eyes first converge onto single neurons in the visual pathway. V1 neurons detect binocular disparity through a computation similar to cross-correlation between the left and right retinal images. However, this does not necessarily mean that V1 neurons directly underlie stereoscopic depth perception. In fact, the properties of V1 cells do not account for a number of aspects of stereopsis, suggesting that subsequent processing in cortical areas beyond V1 is responsible for conscious perception of stereoscopic depth. It was long believed that binocular disparity information is processed along the visual pathway projecting from V1 to the parietal cortex (the dorsal pathway). Studies from our and other laboratories over the past 15 years, however, have revealed that binocular disparity signals are processed along both the dorsal pathway and the pathway projecting from V1 to the temporal cortex (the ventral pathway). We thus use both major cortical pathways for stereopsis. Why?
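
  As a rough illustration of the kind of computation mentioned above, the following minimal NumPy sketch estimates disparity for a one-dimensional image patch by sliding it along the other eye's image and keeping the horizontal offset with the highest normalized cross-correlation. The patch size, disparity range, and toy data are hypothetical; this is an illustration, not the speaker's model.

```python
# Sketch: disparity estimation by normalized cross-correlation (assumed setup).
import numpy as np

def estimate_disparity(left_patch, right_row, max_disparity=16):
    """Return the horizontal offset of right_row that best matches left_patch."""
    n = left_patch.size
    lp = (left_patch - left_patch.mean()) / (left_patch.std() + 1e-9)
    best_d, best_corr = 0, -np.inf
    for d in range(max_disparity + 1):
        window = right_row[d:d + n]
        if window.size < n:
            break
        w = (window - window.mean()) / (window.std() + 1e-9)
        corr = float(np.dot(lp, w)) / n          # normalized cross-correlation
        if corr > best_corr:
            best_d, best_corr = d, corr
    return best_d, best_corr

# Toy example: the "right eye" sees the same 1-D pattern shifted by 5 pixels.
rng = np.random.default_rng(0)
scene = rng.standard_normal(64)
left = scene[:32]
right = np.concatenate([rng.standard_normal(5), scene])[:64]
print(estimate_disparity(left, right))           # -> (5, ~1.0)
```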

  Recent studies are now answering this question: the two cortical pathways contribute to stereopsis in different ways. Neurons in ventral pathway areas solve the binocular correspondence problem, compute relative disparity between visual features, and exhibit activity correlated with behavioral judgments in fine disparity discrimination. Those in dorsal pathway areas encode binocular correlation, signal local absolute disparity, and are involved in judgments of coarse disparity and in the control of vergence angle.

Biography: 
  Ichiro Fujita received his Bachelor's degree (1979) from the University of Tokyo, and his M.Sc. (1981) and Ph.D. (1984) from the University of Tokyo Graduate School. He then received postdoctoral training at the National Institute for Physiological Sciences (Okazaki, Japan), the California Institute of Technology (Pasadena, U.S.A.), and RIKEN (Wako, Japan). In 1994, he became a professor at Osaka University Medical School and started his Laboratory of Cognitive Neuroscience. He is currently a professor at the Osaka University Graduate School of Frontier Biosciences, with joint appointments at the Center for the Study of Communication Design, the Research Center for Behavioral Economics, and the Center for Information and Neural Networks. He has been studying the neural mechanisms of vision, particularly object recognition and binocular depth perception (stereopsis).

 

Prof. Lee

Locality Sensitive Hashing Techniques
for Big Data Processing
 

Prof. Keon Myung Lee

Dept. of Computer Science,
Chungbuk National University, Korea

Abstract: 
  Big data offers new opportunities and challenges for business and industry. The volume, variety, and velocity of data are major issues in big data processing. In particular, processing speed can be a key bottleneck in big data-based information services. Here I give a general overview of locality sensitive hashing techniques for big data processing. These techniques have been studied as a way to efficiently handle the nearest neighbor search and similar pair identification problems for big data, as in information/image retrieval, duplicate search, collaborative filtering-based recommendation, pattern recognition, and so on. Basically, hashing is a technique for directly locating data in a table with simple computation. Building on these inherent characteristics of hashing, locality sensitive hashing tries to make similar data fall into the same buckets. We start with the conventional approaches and then examine some typical methods for locality sensitive hashing. After that we address a special type of locality sensitive hashing, called semantic hashing, which produces binary codewords as hash values so that a short Hamming distance between codewords implies close similarity. We take a glimpse at how, and with which techniques, locality sensitive hashing deals with numeric, set-valued, or categorical data. We also discuss how to incorporate soft computing into locality sensitive hashing.
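
  As a hedged illustration of the basic idea (a generic random-hyperplane scheme, not necessarily one of the methods covered in the talk), the sketch below maps vectors to binary codewords whose bits are the signs of random projections, so that similar vectors tend to share a bucket and to lie at small Hamming distance. The dimensions, bit counts, and data are arbitrary assumptions.

```python
# Sketch: random-hyperplane locality sensitive hashing (assumed parameters).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)

def lsh_code(x, planes):
    """Binary codeword: one bit per random hyperplane (sign of the projection)."""
    return tuple((planes @ x > 0).astype(int))

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

dim, n_bits = 8, 16
planes = rng.standard_normal((n_bits, dim))      # random hyperplane normals

# Index a toy data set into buckets keyed by codeword.
data = rng.standard_normal((100, dim))
buckets = defaultdict(list)
for i, x in enumerate(data):
    buckets[lsh_code(x, planes)].append(i)

# A query close to data[0] should land in (or near) data[0]'s bucket.
query = data[0] + 0.01 * rng.standard_normal(dim)
q_code = lsh_code(query, planes)
print("candidates in query bucket:", buckets.get(q_code, []))
print("Hamming distance to data[0]:", hamming(q_code, lsh_code(data[0], planes)))
```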

Biography: 
  He received his B.S., M.S., and Ph.D. degrees in computer science from KAIST (Korea Advanced Institute of Science and Technology), Korea, and was a postdoctoral fellow at INSA de Lyon, France. He was a visiting professor at the University of Colorado at Denver and a visiting scholar at Indiana University, USA. After an industrial career in Silicon Valley, USA, he joined the Dept. of Computer Science, Chungbuk National University, Korea in 1995. He is now a professor and the director of the Research Institute of Computer and Information Communication established at the university. He serves as the Editor-in-Chief of the International Journal of Fuzzy Logic and Intelligent Systems. His principal research interests are in data mining, machine learning, soft computing, big data processing, and intelligent service systems.

 

Prof. Reformat 

Fuzziness and Information Processing
in Semantic Web
 

Prof. Marek Reformat

Dept. of Electrical and Computer Engineering, 
University of Alberta, Canada

Abstract: 
  There is no doubt that the importance of the Internet is growing, not only because of the rising amount of information stored on it, but also because of the growing involvement of users. Increased activity and freedom in posting information on the web lead to potential problems with the reliability, ambiguity, and correctness of that information. At the same time, users have high expectations of what the web is supposed to provide, while their requests and queries are not very precise. As a result, uncertainty becomes a persistent aspect of the web's contents, as well as of users' interactions with the web.
  Yet these uncertainties can be turned into an advantage. Imprecision can be seen as an inspiration to acquire more information about a given subject, and can lead to uncovering new information not even requested by the user. It can also be used to evaluate information already known to the user in terms of confidence and compatibility.

  The presentation will focus on the application of fuzziness to the processes of building and maintaining personal knowledge repositories based on information found on the web. In particular, a method for determining similarity between concepts will be demonstrated. The method can evaluate similarity in specific contexts and take into account the importance of concept features. Further, an approach suitable for assimilating information from the web will be introduced. The process incorporates aspects of confidence in already known information and of its conformance to information encountered on the web.
  All these processes are deployed in the environment of the Resource Description Framework (RDF). RDF is a Semantic Web data representation format, and its popularity has grown enormously in the last few years.
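
  For illustration only (a hypothetical toy, not Prof. Reformat's actual method), the sketch below represents concepts as fuzzy sets of features and computes a context-weighted fuzzy Jaccard similarity, so that the importance weights supplied by the context decide which features matter. All feature names and membership degrees are invented for the example.

```python
# Sketch: context-weighted fuzzy similarity between concepts (assumed data).
def fuzzy_similarity(concept_a, concept_b, context):
    features = set(concept_a) | set(concept_b) | set(context)
    overlap = agreement = 0.0
    for f in features:
        w = context.get(f, 0.0)                  # importance of the feature
        a, b = concept_a.get(f, 0.0), concept_b.get(f, 0.0)
        overlap += w * min(a, b)                 # fuzzy intersection
        agreement += w * max(a, b)               # fuzzy union
    return overlap / agreement if agreement else 0.0

jaguar_car = {"has_legs": 0.0, "is_fast": 0.8, "has_engine": 1.0}
cheetah    = {"has_legs": 1.0, "is_fast": 1.0, "has_engine": 0.0}

print(fuzzy_similarity(jaguar_car, cheetah, {"is_fast": 1.0}))     # high (~0.8)
print(fuzzy_similarity(jaguar_car, cheetah, {"has_engine": 1.0}))  # 0.0
```

  Under a speed-oriented context the two concepts come out similar, while an engine-oriented context drives their similarity to zero, which is the kind of context sensitivity the abstract refers to.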

Biography:
  Marek Reformat received his M.Sc. degree (with honors) from the Technical University of Poznan, Poland, and his Ph.D. from the University of Manitoba, Canada. Presently, he is a professor in the Department of Electrical and Computer Engineering, University of Alberta. The goal of his research activities is to develop methods and techniques for intelligent data modeling and analysis that lead to the translation of data into knowledge, as well as to design systems able to imitate different aspects of human behavior. In this context, the concepts of Computational Intelligence, with fuzzy computing and possibility theory in particular, are key elements for capturing relationships between pieces of data and knowledge, and for mimicking human ways of reasoning about opinions and facts. Dr. Reformat also works on Computational Intelligence-based approaches for dealing with information stored on the web. He applies elements of fuzzy sets to social networks, linked data, and the Semantic Web in order to handle inherently imprecise information and provide users with unique facts retrieved from the data. All his activities focus on introducing human aspects into web and software systems, which will lead to the development of more human-aware and human-like systems. Dr. Reformat has been a member of the program committees of almost 60 international conferences related to Computational Intelligence and Software Engineering. He is actively involved in the North American Fuzzy Information Processing Society (NAFIPS), and he is a member of the IEEE and ACM.

 

Prof. Yager

Soft Computing for Social Networks
and Information Sharing

 

Prof. Ronald R. Yager

Machine Intelligence Institute, 
Iona College, USA

Abstract: 
  Computer-mediated social networks are now an important technology for worldwide communication, interconnection, and information sharing. Our goal here is to enrich the domain of social network modeling by introducing ideas from fuzzy sets and related granular computing technologies. We approach this extension in a number of ways. One is the introduction of fuzzy graphs to represent the networks. This allows a generalization of the types of connection between nodes in a network, from simply connected or not to weighted or fuzzy connections. A second and perhaps more interesting extension is the use of the fuzzy set based paradigm of computing with words to provide a bridge between a human network analyst's linguistic description of social network concepts and the formal model of the network. We will also describe some methods for sharing information obtained in these types of networks; in particular, we discuss linguistic summarization and tagging methods.
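
  As a small, purely illustrative sketch (all names, degrees, and the particular min/max operators are assumptions, not Prof. Yager's formulation), the following code treats a social network as a fuzzy graph, scores a path by its weakest link, and maps the resulting numeric strength to a linguistic label in the spirit of computing with words.

```python
# Sketch: a fuzzy social graph with linguistic labels (assumed names/degrees).
from itertools import permutations

edges = {                        # tie strength as a fuzzy membership degree
    ("Ann", "Bob"): 0.9,
    ("Bob", "Carol"): 0.6,
    ("Ann", "Dave"): 0.2,
    ("Dave", "Carol"): 0.8,
}

def degree(u, v):
    return max(edges.get((u, v), 0.0), edges.get((v, u), 0.0))

def path_strength(path):
    """Strength of a path = weakest link (min degree) along it."""
    return min(degree(a, b) for a, b in zip(path, path[1:]))

def connectedness(u, v, people):
    """Strongest simple path between u and v (max over paths)."""
    others = [p for p in people if p not in (u, v)]
    paths = [(u, *mid, v) for k in range(len(others) + 1)
             for mid in permutations(others, k)]
    return max(path_strength(p) for p in paths)

def to_words(x):                 # crude linguistic quantization
    return "strong" if x >= 0.7 else "moderate" if x >= 0.4 else "weak"

people = ["Ann", "Bob", "Carol", "Dave"]
s = connectedness("Ann", "Carol", people)
print(s, to_words(s))            # -> 0.6 moderate
```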

Biography:
  Ronald R. Yager is Director of the Machine Intelligence Institute and Professor of Information Systems at Iona College. He is Editor-in-Chief of the International Journal of Intelligent Systems. He has published over 500 papers and fifteen books in areas related to fuzzy sets, human behavioral modeling, decision-making under uncertainty, and the fusion of information. He is among the world's top 1% most highly cited researchers, with over 27,000 citations. He was the recipient of the IEEE Computational Intelligence Society Pioneer Award in Fuzzy Systems. He received the special honorary medal of the 50th Anniversary of the Polish Academy of Sciences. He received the Lifetime Outstanding Achievement Award from the International Fuzzy Systems Association. He received an honorary doctorate, honoris causa, from the State University of Information Technologies, Sofia, Bulgaria. Dr. Yager is a fellow of the IEEE, the New York Academy of Sciences, and the Fuzzy Systems Association. He has served at the National Science Foundation as program director in the Information Sciences program. He was a NASA/Stanford visiting fellow and a research associate at the University of California, Berkeley. He has been a lecturer at NATO Advanced Study Institutes. He is a visiting distinguished scientist at King Saud University, Riyadh, Saudi Arabia, and a distinguished honorary professor at Aalborg University, Denmark. He received his undergraduate degree from the City College of New York and his Ph.D. from the Polytechnic University of New York.