VIII Jornadas de Ingeniería del Software y Bases de Datos
12-14 November 2003
Program: Invited Talks

Five invited talks have been scheduled, taking place on 12, 13 and 14 November. Note: all talks will be given in English.

Keynote 1: Prof. David Parnas
Using Traditional Mathematics for Precise Specification and Description
Many of the problems encountered by software developers (and consequently by users) can be attributed to the lack of precise documentation of requirements, component interfaces, and programming design decisions. For decades, researchers have proposed ameliorating these problems by means of various types of pseudo-codes, sometimes described by the oxymoron "executable specifications". Such "models" inevitably show one way to get the desired result rather than characterise the set of acceptable results. Almost always, these descriptions include information that cannot be verified by a user and thereby suggest an implementation approach for the developer. This talk will show that, with the help of tabular notation for writing mathematical expressions, we can produce readable documents that do nothing other than characterise the acceptable behaviours. We will also discuss tools that can perform important checks on the documents and even allow simulators to be generated. The tools allow testers to be involved earlier in the software development cycle and empower them to test the intended functionality of the software, checking whether it satisfies the specification.
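To give a flavour of the tabular notation the talk refers to (this example is ours, not taken from the talk): a function can be characterised by a table whose header row partitions the input domain, with the body row stating the required value under each condition. For instance, a clamping function over an interval [lo, hi]:

```latex
% A tabular expression: the header row partitions the input domain,
% the body row gives the value required in each case. The specification
% characterises acceptable behaviour without suggesting an algorithm.
\mathit{clamp}(x) \;=\;
\begin{array}{|c|c|c|}
\hline
x < \mathit{lo} & \mathit{lo} \le x \le \mathit{hi} & x > \mathit{hi} \\
\hline
\mathit{lo} & x & \mathit{hi} \\
\hline
\end{array}
```

Because the columns are mutually exclusive and jointly exhaustive, a tool can mechanically check completeness and consistency of such a table, which is one of the checks the talk alludes to.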

Professor Parnas received his B.S., M.S. and Ph.D. in electrical engineering (systems and communications sciences) from CMU, and honorary doctorates from the ETH in Zurich and the Catholic University of Louvain in Belgium. He is licensed as a Professional Engineer in the Province of Ontario. In 1979, he won a "Best Paper" Award from the Association for Computing Machinery (ACM), and he has twice won "Most Influential Paper" awards at the International Conference on Software Engineering. In 1998, he won ACM SIGSOFT's "Outstanding Research Award". He is currently director of the Software Engineering Programme in the Computing and Software Department of the Faculty of Engineering at McMaster University. He has also served as an advisor to various manufacturers and government organizations; at the Naval Research Laboratory (NRL) in Washington, DC, he instigated the Software Cost Reduction (A-7) Project, which developed and applied software technology to aircraft weapons systems. He has also advised the Atomic Energy Control Board of Canada on the use of safety-critical real-time software at the Darlington Nuclear Generating Station.

More information: Software Quality Research Laboratory (University of Limerick)

Slides (pdf)

Keynote 2: Prof. Yuri Gurevich
Executable Specifications: The Abstract State Machine Approach
Some people think that executable specification is a contradiction in terms. We think that executable specifications will change the way software is designed, developed, tested and documented. Our opinion is based on the theory of abstract state machines, international experimentation with ASMs, and the applied ASM work of the group on Foundations of Software Engineering in Microsoft Research.

In contrast to the natural sciences, computer science is primarily about an artificial reality: computer systems. Mathematically speaking, what is a computer system? A computer system may have many meaningful levels of abstraction. Fix such a level. The ASM theory tells us that there is an abstract state machine that, behaviorally, is identical to our system at the chosen abstraction level. The specification language AsmL, developed by the FSE group, makes writing ASM models practical. Our tools increasingly allow developers to experiment with their design, validate it and enforce it. The tools allow testers to be involved earlier in the software development cycle and empower them to test the intended functionality of software (and not only its robustness).
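The core ASM idea can be sketched in a few lines. This is our own illustration, not AsmL: a state is a finite map, and one machine step fires every rule whose guard holds, computing all updates against the current state and then applying them simultaneously.

```python
# Illustrative sketch of an abstract state machine step (ours, not AsmL).
# A rule is a (guard, update) pair; all enabled updates read the *old*
# state and are applied atomically, which is what makes the parallel
# swap below work without a temporary variable.

def asm_step(state, rules):
    """Fire every rule whose guard holds; apply all updates simultaneously."""
    updates = {}
    for guard, update in rules:
        if guard(state):
            updates.update(update(state))  # computed against the old state
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# A two-location machine that swaps x and y while they differ.
rules = [
    (lambda s: s["x"] != s["y"], lambda s: {"x": s["y"]}),
    (lambda s: s["x"] != s["y"], lambda s: {"y": s["x"]}),
]

state = {"x": 1, "y": 2}
state = asm_step(state, rules)
print(state)  # {'x': 2, 'y': 1}
```

An executable model like this is still a specification in the sense of the talk: it fixes observable behaviour at one abstraction level, and a tester can run it against the implementation.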

Yuri Gurevich is Sr. Researcher at Microsoft Research in Redmond, WA. He is also Professor Emeritus at the University of Michigan, ACM Fellow, Guggenheim Fellow, and Dr. Honoris Causa of Limburg University in Belgium.

More information: Yuri Gurevich (Microsoft Research)

Keynote 3: Prof. Michael Kifer
In Search of Semantics for the Semantic Web
It is hard to stay completely impervious to all the buzz about the Semantic Web these days. The idea is to extend the current HTML-based Web, intended for human consumption, with information intended for machine consumption. The overall vision is that software agents, such as work, study, shopping, and business assistants, will turn the Web into a basic utility as pervasive as electricity and telephone. To make all this possible, machines need to "understand" the information they read, which brings us to semantics. In this talk I will survey some of the modeling techniques used in Semantic Web research and practice, such as ontologies, rules, and constraints, and the underlying formal tools, including Description Logic and F-logic. I will also discuss the ideas underlying the emerging field of Semantic Web Services.
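A minimal sketch of what "machine consumption" means here (our own toy, not any of the formalisms the talk surveys): facts as subject-predicate-object triples, plus inference rules that close the set under subclass transitivity and type inheritance, in the spirit of ontology reasoning.

```python
# Toy triple store with two inference rules (ours, for illustration only):
#   (a subClassOf b) and (b subClassOf c)  =>  (a subClassOf c)
#   (x type a)       and (a subClassOf b)  =>  (x type b)

triples = {
    ("Student", "subClassOf", "Person"),
    ("Person", "subClassOf", "Agent"),
    ("alice", "type", "Student"),
}

def infer(triples):
    """Return the closure of the triple set under the two rules above."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if b == c and p1 == "subClassOf" and p2 == "subClassOf":
                    new.add((a, "subClassOf", d))
                if b == c and p1 == "type" and p2 == "subClassOf":
                    new.add((a, "type", d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

closed = infer(triples)
print(("alice", "type", "Agent") in closed)  # True: derived, not stated
```

Real ontology languages add much more (negation, constraints, object identity in F-logic, concept constructors in Description Logic), but the derivation of unstated facts from stated ones is the essential step.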

Michael Kifer is a Professor with the Department of Computer Science, State University of New York at Stony Brook (USA). He received his Ph.D. in Computer Science in 1985 from the Hebrew University of Jerusalem, Israel, and the M.S. degree in Mathematics in 1976 from Moscow University, Russia.
Dr. Kifer's interests include database systems, knowledge representation, and Web information systems. He has published two textbooks and numerous articles in these areas. In 1999 and 2002 he was a recipient of the ACM-SIGMOD "Test of Time" awards for his work on F-logic and object-oriented database languages.

More information: Michael Kifer (Department of Computer Science, University at Stony Brook)

Keynote 4: Prof. Nikos Lorentzos
Temporal Data Management: Research Review and Solution
Commercial Database Management Systems do not directly support temporal data, i.e. data that changes with respect to time. At first glance this seems a trivial problem, but elementary experimentation suffices to show that the management of temporal data has many peculiarities. For this reason, much research has been undertaken for more than a decade, and quite a lot of temporal data models have been proposed. This presentation identifies some of the problems related to the handling of temporal data and reviews relevant research work. It also presents the formalization of a relational algebra that overcomes these problems. The operations defined are closed and general, in that they enable the processing of what can be called interval data. Follow-up research has shown that the same set of operations enables the uniform management of temporal, spatial and spatio-temporal data.
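One of the peculiarities the talk alludes to is that joining temporal relations must intersect validity intervals, not just match attribute values. The sketch below is our own illustration of that idea, not the algebra formalized in the talk: tuples carry a half-open interval [start, end), and a temporal join keeps matching pairs stamped with the intersection of their intervals.

```python
# Illustrative temporal join (ours): tuples are (value, (start, end)),
# with half-open intervals [start, end). Two tuples join when their
# values match and their intervals overlap; the result carries the
# intersection, so the operation is closed over interval-stamped data.

def overlap(i, j):
    """Intersection of two half-open intervals, or None if disjoint."""
    start, end = max(i[0], j[0]), min(i[1], j[1])
    return (start, end) if start < end else None

def temporal_join(r, s):
    """Pairs with equal value and overlapping time, stamped accordingly."""
    out = []
    for v1, i in r:
        for v2, j in s:
            k = overlap(i, j)
            if v1 == v2 and k:
                out.append((v1, k))
    return out

employment = [("Ann", (1995, 2000))]   # Ann employed 1995..1999
assignment = [("Ann", (1998, 2003))]   # Ann on a project 1998..2002
print(temporal_join(employment, assignment))  # [('Ann', (1998, 2000))]
```

The same interval machinery applies unchanged if the intervals range over space rather than time, which hints at why one set of operations can cover temporal, spatial and spatio-temporal data.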

Nikos Lorentzos received a first degree in Mathematics (Athens University, 1975), a Master's degree in Computer Science (Queens College, CUNY, 1981), and a Ph.D. in Computer Science (Birkbeck College, London University, 1988). He is known for his research in temporal databases, where he formalized a temporal relational algebra and defined IXSQL, an Interval Extended SQL. Work on this extension is currently a candidate for ISO standardization. Citations to his published work, some with positive comments, exceed 100, and his research results have appeared in internationally distributed books. His work has stimulated subsequent research at various universities and organizations; in subsequent research, he defined a data model that enables the uniform management of temporal, spatial and spatio-temporal data. He has participated in many research projects, has acted as an evaluator of programs submitted for funding to the European Union and as a Ph.D. examiner at various European universities, and serves as a reviewer for international journals and conferences. He is a co-author of the book Temporal Data and the Relational Model (with C. Date and H. Darwen). He is currently an Associate Professor at the Informatics Laboratory of the Agricultural University of Athens. His major research interests are temporal, interval, and spatial databases, and image processing.

More information: Nikos A. Lorentzos (Informatics Laboratory, Agricultural University of Athens)

Keynote 5: Prof. Alain Colmerauer
Complexity of Universal Programs
The consumer knows: when buying a computer, the design features matter little; what matters are the supplied software programs. These programs alone decide the possible uses of the box with its keyboard and screen. The cleverness of the machine lies in its software, that is to say, in programs P which generate outputs y from inputs x. What, then, is the program which makes the machine really clever? Answer: a universal program U, which takes as input any program P followed by an input x for P, and generates the same output as the one generated by P from x. In short, a universal program enables the machine to manipulate programs as data and to compute the outputs of their executions: the instruction leaflet is inside the input, and the executed program is always the same!

The object of this talk is the complexity of such a universal program U. As a measure of this complexity we could take the size of U, but being interested in the efficiency of U we will instead take a ratio c of numbers of executed instructions.

Let us make a remark before defining c exactly. A universal program U being a particular case of a program P, let x be an input for U. If one executes U on an input made from U followed by x then, according to our universality definition, one obtains the same output as the one obtained by directly executing U on x. More generally, if one executes U on U repeated n times, followed by x, one again obtains the same output as the one obtained by directly executing U on x, but one executes a considerably greater number f(n,x) of instructions. It is then natural to define the complexity of U as the limit c of the ratio f(n+1,x)/f(n,x) when n becomes infinitely large. This ratio c is in some sense the average number of instructions which U executes to simulate one of its own instructions. By letting n approach infinity, we obtain a value c which, under a few assumptions, does not depend on x. We present a universal program of complexity c:
  • equal to 26.27, for a machine which idealizes a classical computer (but with an infinite number of registers, each containing an unbounded integer),
  • equal to 3672.98, for a 4-symbol Turing machine (the blank symbol included) with one bi-infinite tape, the Turing machine being one of the simplest idealized machines.
We end by explaining how to compute c and justify its existence.
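Why the ratio f(n+1,x)/f(n,x) converges can be seen numerically under a simple cost model of our own (the constants below are illustrative, not the talk's construction): if simulating one instruction costs k instructions plus a fixed per-level overhead h, then f(n+1) = k·f(n) + h, and the ratio tends to k as n grows, since the overhead term h/f(n) vanishes.

```python
# Numerical illustration (ours) of c = lim f(n+1,x)/f(n,x).
# Cost model: each extra level of self-interpretation multiplies the
# instruction count by k and adds a fixed overhead h, so
#   f(n+1) = k * f(n) + h,
# and f(n+1)/f(n) = k + h/f(n) -> k as n grows.

def f(n, base_cost, k=26.27, h=1000.0):
    """Instruction count after n levels of self-interpretation (toy model)."""
    cost = base_cost
    for _ in range(n):
        cost = k * cost + h
    return cost

ratios = [f(n + 1, 500) / f(n, 500) for n in range(1, 12)]
print(round(ratios[-1], 2))  # 26.27: the per-level overhead washes out
```

This is only the convergence argument, not the hard part of the result; the talk's contribution is exhibiting concrete universal programs whose per-instruction simulation cost is as small as the figures quoted above.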

Alain Colmerauer is Professor at the University of Marseille (Université de la Méditerranée). He received a "doctorat d'état" at the University of Grenoble in 1967. He is known for his research on natural language processing, the design of Prolog I, II, III and IV, and his research on constraint solving in various domains.

More information: Alain Colmerauer (Université de la Méditerranée)

Last updated: 17/12/2018    Contact information:    Web maintenance: Sergio Luján Mora