WI09 Invited Speakers

Profile: Stefano Ceri
Stefano Ceri is professor of Database Systems at the Dipartimento di Elettronica e Informazione (DEI), Politecnico di Milano; he was a visiting professor at the Computer Science Department of Stanford University between 1983 and 1990. He is vice-chairman of Alta Scuola Politecnica, a school of excellence for master-level students jointly organized by Politecnico di Milano and Politecnico di Torino. He is an associate editor of several international journals, co-editor-in-chief of the book series "Data Centric Systems and Applications" (Springer-Verlag), author of over 250 articles in international journals and conference proceedings, and co-author of nine international books.
His research interests focus on extending database technology to incorporate data distribution, deductive and active rules, object orientation, and XML query languages, as well as on design methods for data-intensive Web sites, stream reasoning, and search computing. He is co-inventor of WebML, a model for the conceptual design of Web applications, and co-founder of Web Models, a startup of Politecnico di Milano focused on the commercialization of WebML through the product WebRatio. He has been responsible for several EU-funded projects, including an IDEAS Advanced Grant on "Search Computing" (2008-2013), awarded in July 2008 and funded by the European Research Council (ERC).

Abstract: Search Computing
"Who are the strongest European competitors on software ideas? Who is the best doctor to cure insomnia in a nearby hospital? Where can I attend an interesting conference in my field closest to a sunny beach?" This information is available on the Web, but no software system can accept such queries or compute the answer. We hereby propose search computing as a new multi-disciplinary science which will provide the abstractions, foundations, methods, and tools required to answer these and many similar queries. While state-of-the-art search systems answer generic or domain-specific queries, search computing enables answering questions via a constellation of dynamically selected, cooperating search services. Search computing requires innovation in software principles, languages, interfaces, and protocols, as well as contributions from other sciences such as mathematics, operations research, psychology, sociology, and the economic and legal sciences.
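As a minimal sketch of this idea, with invented service names and data (not an actual search-computing system), the third sample query can be answered by invoking two cooperating search services, joining their results, and ranking by a combined score:

```python
# Hypothetical search-computing query plan for "a conference in my field
# near a sunny beach": one domain-specific search service per sub-query,
# joined on city. All names, data and scores are illustrative.

def conference_service(topic):
    """Hypothetical service: conferences relevant to a topic."""
    return [  # (conference, city, relevance score in [0, 1])
        ("WI'09", "Milan", 0.9),
        ("WebConf", "Lisbon", 0.8),
        ("DataSummit", "Oslo", 0.7),
    ]

def weather_service(city):
    """Hypothetical service: expected hours of sunshine per day."""
    sunshine = {"Milan": 7.0, "Lisbon": 9.5, "Oslo": 4.0}
    return sunshine.get(city, 0.0)

def joined_answers(topic, top_k=2):
    """Join the two services tuple-by-tuple and rank by a combined score."""
    results = []
    for conf, city, relevance in conference_service(topic):
        sun = weather_service(city)          # second service call per tuple
        score = relevance * (sun / 10.0)     # simple combining function
        results.append((conf, city, round(score, 3)))
    return sorted(results, key=lambda r: r[2], reverse=True)[:top_k]

best = joined_answers("web intelligence")
```

The point of the sketch is the composition: neither service alone can answer the question, but the join over dynamically selected services can.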

Profile: Marco Dorigo
Marco Dorigo received the Laurea (Master of Technology) degree in industrial technologies engineering in 1986 and the doctoral degree in information and systems electronic engineering in 1992 from Politecnico di Milano, Milan, Italy, and the title of Agrégé de l'Enseignement Supérieur, from the Université Libre de Bruxelles, Belgium, in 1995.
From 1992 to 1993 he was a Research Fellow at the International Computer Science Institute in Berkeley, CA. In 1993 he was a NATO-CNR Fellow, and from 1994 to 1996 a Marie Curie Fellow. Since 1996 he has been a tenured researcher of the FNRS, the Belgian National Fund for Scientific Research, and a Research Director of IRIDIA, the artificial intelligence laboratory of the Université Libre de Bruxelles. He is the inventor of the ant colony optimization metaheuristic. His current research interests include swarm intelligence, swarm robotics, and metaheuristics for discrete optimization.
Dr. Dorigo is the Editor-in-Chief of the Swarm Intelligence journal, and an Associate Editor or member of the editorial board of many journals in computational intelligence and adaptive systems, among them the IEEE Transactions on Systems, Man, and Cybernetics, the IEEE Transactions on Evolutionary Computation, the IEEE Transactions on Autonomous Mental Development, and the ACM Transactions on Autonomous and Adaptive Systems.
Dr. Dorigo was awarded the Italian Prize for Artificial Intelligence in 1996, the Marie Curie Excellence Award in 2003, the Dr. A. De Leeuw-Damry-Bourlart award in applied sciences in 2005, and the Cajastur International Prize for Soft Computing in 2007.
He is a Fellow of the IEEE and of ECCAI.

Abstract: Swarm-bots and Swarmanoid: Two experiments in embodied swarm intelligence
Swarm intelligence is the discipline that deals with natural and artificial systems composed of many individuals that coordinate using decentralized control and self-organization. In particular, it focuses on the collective behaviors that result from the local interactions of the individuals with each other and with their environment. The characterizing property of a swarm intelligence system is its ability to act in a coordinated way without the presence of a coordinator or of an external controller.
Swarm robotics could be defined as the application of swarm intelligence principles to the control of groups of robots.
In this talk I will discuss results of Swarm-bots, an experiment in swarm robotics. A swarm-bot is an artifact composed of a swarm of assembled s-bots. The s-bots are mobile robots capable of connecting to, and disconnecting from, other s-bots. In the swarm-bot form, the s-bots are attached to each other and, when needed, become a single robotic system that can move and change its shape.
S-bots have relatively simple sensors and motors and limited computational capabilities. A swarm-bot can solve problems that cannot be solved by s-bots alone.
In the talk, I will briefly describe the s-bots' hardware and the methodology we followed to develop algorithms for their control. Then I will focus on the capabilities of the swarm-bot robotic system by showing video recordings of some of the many experiments we performed to study coordinated movement, path formation, self-assembly, collective transport, shape formation, and other collective behaviors.
I will conclude by presenting initial results of the Swarmanoid experiment, an extension of swarm-bots to three-dimensional environments.
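The decentralized coordination described above is also the principle behind ant colony optimization, the metaheuristic Dorigo invented: individually simple agents solve a global problem through local stigmergic communication. A minimal sketch (toy graph and parameters of my choosing, not code from the talk), finding a shortest path via pheromone trails:

```python
import random

# Tiny weighted graph: node -> {neighbor: distance}. Illustrative data.
GRAPH = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.0, "D": 4.0},
    "C": {"A": 5.0, "B": 1.0, "D": 1.0},
    "D": {"B": 4.0, "C": 1.0},
}

def ant_colony_shortest_path(graph, start, goal, n_ants=20, n_iters=50,
                             alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ACO: ants build paths biased by pheromone and distance."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone per edge
    best_path, best_len = None, float("inf")
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                weights = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                length = sum(graph[u][v] for u, v in zip(path, path[1:]))
                paths.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        for edge in tau:                 # evaporation
            tau[edge] *= (1.0 - rho)
        for path, length in paths:       # deposit proportional to quality
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / length
    return best_path, best_len

path, length = ant_colony_shortest_path(GRAPH, "A", "D")
```

No ant knows the whole graph; the colony converges on the best route (A-B-C-D here) purely through local choices and the shared pheromone trail.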

Profile: Ronald R. Yager
Ronald R. Yager is Director of the Machine Intelligence Institute and Professor of Information Systems at Iona College. He is editor-in-chief of the International Journal of Intelligent Systems. He has worked in the area of machine intelligence and decision making under uncertainty for over twenty-five years. He has published over 500 papers and fifteen books. He is among the world's top 1% most highly cited researchers, with over 7000 citations. He was the recipient of the IEEE Computational Intelligence Society Pioneer Award in Fuzzy Systems. Dr. Yager is a fellow of the IEEE, the New York Academy of Sciences and the Fuzzy Systems Association. He was given a lifetime achievement award by the Polish Academy of Sciences. He served at the National Science Foundation as program director in the Information Sciences program. He was a NASA/Stanford visiting fellow and a research associate at the University of California, Berkeley. He has been a lecturer at NATO Advanced Study Institutes, and a distinguished honorary professor at Aalborg University Esbjerg, Denmark. He is an affiliated distinguished researcher at the European Centre for Soft Computing. He serves on the editorial boards of numerous technology journals.

Abstract: Intelligent Social Network Modeling
The recent development of Web 2.0 has provided an enormous increase in human interactions across all corners of the earth. One manifestation of this is the growth of computer-mediated social networks. Many notable Web 2.0 applications such as Facebook, MySpace and LinkedIn are social networks. Relational networks are becoming an important technology for modeling these types of social networks and the type of collaborative intelligence that arises from these interactions. Our goal here is to enrich the domain of social network modeling by introducing ideas from fuzzy sets and related granular computing technologies to provide a bridge between a human network analyst's linguistic description of social network concepts and the formal model of the network.

Profile: Yulin Qin
Yulin Qin is a professor at the International WIC Institute (WICI) at Beijing University of Technology, and a senior research psychologist in the Department of Psychology, Carnegie Mellon University. Professor Qin received his M.E. in computer science and engineering from Beijing University of Aeronautics and Astronautics, and his Ph.D. in cognitive psychology from Carnegie Mellon University. His research interests include cognitive psychology, cognitive neuroscience and Web Intelligence, and currently focus on the neural basis of ACT-R, a computational cognitive model, and its relation to Web Intelligence.

Abstract: Various Levels from Brain Informatics to Web Intelligence
In the early stages of artificial intelligence (AI), AI was very close to the cognitive psychology of the time, based on the recognition that both the computer and the human brain are information-processing machines that meet the requirements for showing intelligence. A similar convergence appears to be emerging today between Web Intelligence (WI) and Brain Informatics (BI), based on the recognition that both the World Wide Web (the Web) and the human brain are huge, open information systems that meet the requirements for dealing with scalable, dynamically changing, distributed, incomplete and inconsistent information, and on the advancements both in the Web (e.g., the Semantic Web and human-level wisdom-Web computing) and in BI (e.g., advanced information technologies for brain science and non-invasive neuroimaging technologies, such as functional magnetic resonance imaging (fMRI)). ACT-R is a theory and model of computational cognitive architecture that consists of functional modules, such as a declarative knowledge module, a procedural knowledge module, a goal module, and input (visual, aural) and output (motor, verbal) modules. Information can be processed in parallel within and among the modules, but must be processed sequentially when the procedural module is needed to coordinate behavior across modules. At the International WIC Institute (WICI), we are trying to introduce this kind of architecture, and the mechanism of activation of the units in the declarative knowledge module, into our wisdom-Web computing system. Based on or closely related to ACT-R, theories and models with close relations to WI have also been developed, such as threaded cognition for concurrent multitasking, cognitive agents, and models of human-Web interaction (e.g., SNIF-ACT, Scent-based Navigation and Information Foraging in the ACT cognitive architecture). At the WICI, we are also studying user behavior and reasoning on the Web using eye-tracking and fMRI.
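The activation mechanism mentioned above, in its standard textbook form (this is the generic ACT-R equation, not WICI's implementation), scores a declarative chunk by its usage history plus spreading activation from the current context:

```python
import math

# Standard ACT-R declarative activation, sketched for illustration:
#   A_i = B_i + sum_j W_j * S_ji
# where B_i is base-level activation from past presentations,
#   B_i = ln( sum over past uses of (now - t_j)^(-d) ),  d = decay.

def base_level(presentation_times, now, decay=0.5):
    """Base-level activation: recent and frequent chunks score higher."""
    return math.log(sum((now - t) ** -decay for t in presentation_times))

def activation(presentation_times, now, context_strengths, context_weight=1.0):
    """Total activation: base level plus spreading activation W_j * S_ji."""
    spread = sum(context_weight * s for s in context_strengths)
    return base_level(presentation_times, now) + spread

# A chunk used at t=10 and t=50, queried at t=100, with two context
# elements of associative strength 0.4 and 0.2 (illustrative numbers).
a = activation([10.0, 50.0], 100.0, [0.4, 0.2])
```

The retrieval with the highest activation wins, which is the mechanism the abstract proposes to carry over into ranking units in a wisdom-Web computing system.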
Humans can perceive the real world at many levels of granularity (i.e., abstraction) and can easily switch among granularities. By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as an in-depth understanding of the inherent knowledge structure. At the WICI, we are taking Granular Reasoning (GrR) as a human-intelligence-inspired methodology and developing specific methods for a reasoning process at variable precision at Web scale. All of the above will be discussed in my talk as examples of the various levels from BI to WI, showing the trend of close interaction between BI and WI, which will benefit both.

Profile: Katia Sycara
Katia Sycara is a Professor in the School of Computer Science at Carnegie Mellon University and holds the Sixth Century Chair in Computing Science at the University of Aberdeen in the U.K. She is the Director of the Laboratory for Agents Technology and Semantic Web Technologies. She holds a B.S. in Applied Mathematics from Brown University, an M.S. in Electrical Engineering from the University of Wisconsin, and a Ph.D. in Computer Science from the Georgia Institute of Technology. She holds an Honorary Doctorate from the University of the Aegean (2004). She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the American Association for Artificial Intelligence (AAAI) and the recipient of the 2002 ACM/SIGART Agents Research Award. She is a member of the Scientific Advisory Board of France Telecom. Prof. Sycara has given numerous invited talks, and has authored or co-authored more than 350 technical papers dealing with Multiagent Systems, Agents Supporting Human Teams, Multi-Agent Learning, Sensor Networks, Web Services, the Semantic Web, Human-Agent Interaction, Negotiation, Case-Based Reasoning and numerous applications of these techniques. Prof. Sycara has served as program co-chair of the International Conference on Service Oriented Computing and Applications (SOCASE 2007), program co-chair of the 6th IEEE/ACM Conference on Intelligent Agent Technology (IAT 2006), program chair of the Second International Semantic Web Conference (ISWC 2003), general chair of the Second International Conference on Autonomous Agents (Agents 98), chair of the Steering Committee of the Agents Conference (1999-2001), Scholarship chair of AAAI (1993-1999) and member of the AAAI Executive Council (1996-99). She is a founding member and member of the Board of Directors of the International Foundation for Multiagent Systems (IFMAS), and a founding member of the Semantic Web Science Association.
She is a founder of the journal “Autonomous Agents and Multiagent Systems”, serving as Editor-in-Chief from 1998 to 2007, and serves on the editorial boards of 7 other journals.

Abstract: Agent Based Aiding of Human Teams
Teams are a form of organizational structure in which the team members engage in information exchanges in order to fulfill team goals. The activities that the team engages in are inter-dependent and usually involve gathering, interpreting and exchanging information; creating and identifying alternative courses of action; choosing among decision alternatives by considering the different viewpoints of team members; and monitoring the consequences of the decision. Effective teams achieve goals and accomplish tasks that would otherwise not be achievable by groups of uncoordinated individuals. While previous work in teamwork theory has focused on describing ways in which humans coordinate their activities, there has been little prior work on which specific activities and information flows can be aided by software agents, and on how such aiding enhances team performance. Recent interest in supporting emergency response teams, together with military interest in operations other than war and in coalition operations, motivates the need for studies that examine agent aiding strategies and their effect on human team performance.
This talk will present (a) characteristics and challenges of human teamwork that have not been well studied to date, such as decentralization and self-organization, (b) results of studies of human-only teamwork performance that incorporate these challenges in order to establish a baseline, and (c) identification of fruitful ways for agents to aid human teams with these characteristics.  In particular, we will focus on teams that operate in time stressed environments without previous training together. We will also present results of studies where software agents provided decision support for human teams in the performance of a variety of tasks and under different environmental and task constraints. We will close with open challenges and research problems in agent aiding of human teamwork.

Profile: Bhavani Thuraisingham
Bhavani Thuraisingham joined The University of Texas at Dallas in October 2004 as a Professor of Computer Science and Director of the Cyber Security Research Center in the Erik Jonsson School of Engineering and Computer Science. She is an elected Fellow of three professional organizations: the IEEE (Institute of Electrical and Electronics Engineers), the AAAS (American Association for the Advancement of Science) and the BCS (British Computer Society), for her work in data security. She received the IEEE Computer Society’s prestigious 1997 Technical Achievement Award for “outstanding and innovative contributions to secure data management.”
Dr. Thuraisingham’s work in information security and information management has resulted in over 80 journal articles, over 200 refereed conference papers and workshops, and three US patents. She is the author of nine books on data management, data mining and data security, including one on data mining for counter-terrorism and another on Database and Applications Security, and is completing her tenth book, on Secure Service Oriented Information Systems. She has given over 60 keynote presentations at various technical conferences and has also given invited talks at the White House Office of Science and Technology Policy and at the United Nations on data mining for counter-terrorism. She serves (or has served) on the editorial boards of leading research and industry journals and was the Editor-in-Chief of the Computer Standards and Interfaces journal. She is also an Instructor at AFCEA’s (Armed Forces Communications and Electronics Association) Professional Development Center and has served on panels for the Air Force Scientific Advisory Board and the National Academy of Sciences.
Dr. Thuraisingham is the Founding President of “Bhavani Security Consulting”, a company providing services in consulting and training in cyber security and information technology. Prior to joining UTD, Thuraisingham was an IPA (Intergovernmental Personnel Act) appointee at the National Science Foundation, on assignment from the MITRE Corporation. At NSF she established the Data and Applications Security Program, co-founded the Cyber Trust theme, and was involved in inter-agency activities in data mining for counter-terrorism. She joined MITRE in January 1989, worked in MITRE's Information Security Center, and was later a department head in Data and Information Management as well as Chief Scientist in Data Management. She served as an expert consultant in information security and data management to the Department of Defense, the Department of the Treasury and the Intelligence Community for over 10 years. Thuraisingham’s industry experience includes six years of research and development at Control Data Corporation and Honeywell Inc.
Thuraisingham was educated in the United Kingdom both at the University of Bristol and at the University of Wales.

Abstract: Data Mining for Malicious Code Detection and Security Applications
Data mining is the process of posing queries and extracting patterns, often previously unknown, from large quantities of data using pattern matching or other reasoning techniques. Data mining has many applications in security, including national security as well as cyber security. Threats to national security include attacking buildings and destroying critical infrastructures such as power grids and telecommunication systems. Data mining techniques are being investigated to find out who the suspicious people are and who is capable of carrying out terrorist activities. Cyber security is concerned with protecting computer and network systems against corruption due to Trojan horses, worms and viruses. Data mining is also being applied to provide solutions such as intrusion detection and auditing.
The first part of the presentation will discuss my joint research with Prof. Latifur Khan and our students at the University of Texas at Dallas on data mining for cyber security applications. For example, anomaly detection techniques could be used to detect unusual patterns and behaviors. Link analysis may be used to trace viruses to their perpetrators. Classification may be used to group various cyber attacks and then use the profiles to detect an attack when it occurs. Prediction may be used to determine potential future attacks, based in part on information learned about terrorists through email and phone conversations. Data mining is also being applied to intrusion detection and auditing. Other applications include data mining for malicious code detection, such as worm detection, and for managing firewall policies.
The second part of the presentation will discuss the various types of threats to national security and describe data mining techniques for handling such threats. Threats include non-real-time threats and real-time threats. We need to understand the types of threats and also gather good data to carry out mining and obtain useful results. The challenge is to reduce false positives and false negatives. The third part of the presentation will discuss some of the research challenges. We need some form of real-time data mining; that is, the results have to be generated in real time, and we also need to build models in real time for real-time intrusion detection. Data mining is also being applied to credit card fraud detection and biometrics-related applications. While some progress has been made on topics such as stream data mining, there is still a lot of work to be done here. Another challenge is to mine multimedia data, including surveillance video. Finally, we need to maintain the privacy of individuals, and much research has been carried out on privacy-preserving data mining. In summary, the presentation will provide an overview of data mining and the various types of threats, then discuss the applications of data mining for malicious code detection, cyber security and national security, and the consequences for privacy.
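To make the anomaly-detection idea concrete, here is a toy sketch (my own illustrative data and threshold, not material from the talk) that flags network connections whose byte counts deviate strongly from the observed mean:

```python
import statistics

# Toy statistical anomaly detection: flag observations far from the mean,
# the simplest instance of "detecting unusual patterns and behaviors".

def zscore_anomalies(byte_counts, threshold=3.0):
    """Return indices of observations more than `threshold` standard
    deviations from the mean of the sample."""
    mean = statistics.fmean(byte_counts)
    sd = statistics.pstdev(byte_counts)
    if sd == 0:
        return []          # all observations identical: nothing anomalous
    return [i for i, x in enumerate(byte_counts) if abs(x - mean) / sd > threshold]

# Normal traffic around 500 bytes per connection, plus one suspicious
# 100 kB transfer (index 7).
traffic = [480, 510, 495, 505, 500, 490, 515, 100_000]
suspects = zscore_anomalies(traffic, threshold=2.0)
```

Real intrusion detection works over many features and adaptive models, but the principle is the same: learn what "normal" looks like, then flag departures from it.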

Profile: Chengqi Zhang
Chengqi Zhang has been a Research Professor of Information Technology at the University of Technology, Sydney (UTS), Australia since December 2001. He is currently the Director of the UTS Priority Research Centre for Quantum Computation and Intelligent Systems (QCIS). He has also been the Chairperson of the Australian Computer Society’s National Committee for Artificial Intelligence since 2005 and the Leader of the Data Mining program at the Australian Capital Market Cooperative Research Centre since 2002. Chengqi Zhang obtained his PhD degree from the University of Queensland in 1991 and his Doctor of Science (DSc) degree from Deakin University in 2002.
Prof. Zhang’s research interests include multi-agent systems, data mining, and their integration. He has published more than 200 research papers in these areas. His most notable paper was published in Artificial Intelligence in 1992, the most prestigious journal in the artificial intelligence field. He has also published many papers in first-class international journals, such as IEEE and ACM Transactions. He has led his research team to attract more than $2 million in research grants from the Australian Research Council. He has been invited to give ten keynote/invited speeches at international conferences and workshops.
Prof. Zhang has been actively serving professional communities. He has been an Associate Editor for several international journals, including IEEE Transactions on Knowledge and Data Engineering. He has been the Chair of the Steering Committee for the International Conference on Knowledge Science, Engineering, and Management since 2006, and was the General Co-chair of WI-IAT 2008. As a visiting scholar or visiting professor, he visited the University of Massachusetts for six months in 1993, Carnegie Mellon University for three months in 1995, London University for six months in 1996, the Chinese University of Hong Kong for six months in 2003, and the City University of Hong Kong for six months in 2007. More detailed information can be found on his homepage.

Abstract: Developing Actionable Trading Strategies for Trading Agents
Trading agents are useful for developing and back-testing quality trading strategies for taking actions in the real world. Existing trading agent research mainly focuses on simulation using artificial data. As a result, the actionable capability of the developed trading strategies is often limited, and the trading agents therefore lack power. Actionable trading strategies can empower trading agents with workable decision-making in real-life markets. The development of actionable strategies is a non-trivial task, which needs to consider real-life constraints and organisational factors in the market. In this talk, we first analyse such constraints on developing actionable trading strategies for trading agents and propose a trading strategy development framework for trading agents. We then develop a series of trading strategies for trading agents through optimising, enhancing and discovering actionable trading strategies. We demonstrate working case studies using agent mining technology on real market data. These approaches, and their performance, are evaluated from both technical and business perspectives. These evaluations clearly show that the development of trading strategies for trading agents, using our approach, can lead to smart decisions for brokerage firms and financial companies.
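To give a flavour of what back-testing a strategy means in code, here is a deliberately generic sketch: a simple moving-average crossover rule evaluated against a small synthetic price series. It is a stand-in example of the back-testing process, not one of the actionable strategies developed in the talk.

```python
# Back-test sketch: moving-average crossover on synthetic prices.
# Windows, rule and data are all illustrative assumptions.

def moving_average(prices, window):
    """Simple moving average; element i covers prices[i .. i+window-1]."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def crossover_signals(prices, short=3, long=5):
    """+1 (hold) when the short MA is above the long MA, else 0 (flat)."""
    s, l = moving_average(prices, short), moving_average(prices, long)
    offset = long - short    # align the two series on the same price index
    return [1 if s[i + offset] > l[i] else 0 for i in range(len(l))]

def backtest(prices, signals, long=5):
    """Sum of next-day returns taken only while the signal says 'hold'."""
    pnl = 0.0
    for i, sig in enumerate(signals[:-1]):
        t = i + long - 1                 # price index this signal refers to
        pnl += sig * (prices[t + 1] - prices[t])
    return pnl

prices = [float(p) for p in range(1, 11)]   # synthetic steady uptrend
signals = crossover_signals(prices)
pnl = backtest(prices, signals)
```

A trivial back-test like this ignores exactly the real-life constraints the talk emphasises (transaction costs, liquidity, organisational factors), which is the gap actionable trading strategy research aims to close.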