A Calculation Carrying Program System - Yicho -
Information Access based on Association - GETA (Generic Engine for Transposable Association) -
Physics of (not necessarily quantum) computation
Abstract: "Computation is physical," the late Dr. Landauer claimed in 1980. The claim may be biased toward the physicist's side, but it was a good antithesis to a computer science that had drifted too far from the physical aspects of the computing entity. Later, in 1994, the discovery of Shor's algorithm was a reminder that the physics governing a computer affects its performance. There has, however, been a steady if not plentiful stream of studies on the physics of computation since the late 1950s. This talk gives a brief history of the field from the speaker's point of view, together with some original results, as follows: 1) Statistical physics and information; 2) Quantum measurement and information; 3) The physical cost of computational irreversibility and the thermodynamic limit to computation; 4) Reversible computation and the power of quantum computers.
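As a concrete instance of the thermodynamic limit mentioned in item 3, Landauer's principle (a standard result, stated here only for orientation; it is not a claim about the talk's original results) bounds the heat dissipated by erasing one bit at temperature T:

```latex
E_{\min} \;=\; k_B T \ln 2
\;\approx\; (1.38\times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
\;\approx\; 2.9\times 10^{-21}\,\mathrm{J}
```

Logically reversible computation avoids this bound in principle, which is one reason reversibility (item 4) enters the discussion.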
Fusing XML functions
Abstract: The recent growing popularity of XML documents indicates that there is strong demand for manipulating large data objects. This implies that a fusion mechanism for XML functions, which eliminates the intermediate data structures passed between two functions, would have a big impact on performance. In this talk we will present our on-going work ---ideas and problems--- on this transformation.
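To illustrate the kind of intermediate-structure elimination fusion aims at (a Python sketch, not the talk's own XML setting; the node representation and function names are invented for illustration), composing two list-producing functions materializes an intermediate list, while the fused version traverses the data once:

```python
# Unfused: texts() builds a full intermediate list that upcase() then re-traverses.
def texts(nodes):
    return [n["text"] for n in nodes]      # intermediate list

def upcase(strings):
    return [s.upper() for s in strings]    # second traversal

# Fused: one traversal, no intermediate list.
def upcased_texts(nodes):
    return [n["text"].upper() for n in nodes]

nodes = [{"text": "a"}, {"text": "b"}]
assert upcase(texts(nodes)) == upcased_texts(nodes) == ["A", "B"]
```

For XML the payoff is larger than in this toy: the intermediate structures are whole trees, not flat lists.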
Compiling Real Time Functional Reactive Programming to Automata
Most of the past languages for reactive systems are based on synchronous dataflow. Recently, a new reactive language, called Real-Time Functional Reactive Programming (RT-FRP), has been proposed based on the functional paradigm. The unique feature of this language is the high-level abstraction provided in the form of behaviours for continuous-time signals, and events for discrete-time signals. RT-FRP also features some performance guarantees in the form of bounded runtime and space usage for each reactive computation step.
We propose a new compilation scheme for RT-FRP. Our compilation scheme is based on two key stages. In the first stage, we translate an RT-FRP program to intermediate functional code. This code is deliberately kept at a high level for two reasons. First, it is easier for us to validate for correctness. Second, it allows us to apply high-level source-to-source transformations as a basis for further optimization. The second stage compiles the intermediate code to corresponding automata code. Our main novelty here is the use of two high-level transformation techniques for this compilation. The first technique, called partial evaluation, propagates constant values (where feasible) in order to perform more aggressive specialization. The second technique, called tupling, combines mutually dependent automata into a composite automaton, where possible. Both techniques are needed for generating fast target code for RT-FRP.
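As a rough illustration of the tupling step (a Python sketch with invented transition functions, not the talk's actual compilation scheme), two mutually dependent automata, each of whose next state depends on the other's current state, can be combined into one composite automaton over the product of their state spaces:

```python
# Two mutually dependent automata over inputs 0/1: each next state reads
# both the input and the *other* automaton's current state.
def step_a(a, b, x):
    return (a + x + b) % 2

def step_b(a, b, x):
    return (b + a * x) % 3

# Tupled: one composite automaton whose state is the pair (a, b).
def step_ab(state, x):
    a, b = state
    return (step_a(a, b, x), step_b(a, b, x))

state = (0, 0)
for x in [1, 0, 1]:
    state = step_ab(state, x)
assert state == (0, 1)
```

After tupling, the pair of mutually recursive step functions becomes a single self-recursive one, which is what makes generation of flat automata code possible.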
Applying Model Checking Techniques to Software Development
Model checking has been attracting attention as a means of finding defects in a target system early and efficiently in the upstream phases of software development, and its applications are growing. This talk introduces case studies of applying the technique to the verification of component infrastructures and Web services.
Component Analysis via Size and Set Constraints
Many program optimizations and analyses, such as array-bounds checking and termination analysis, depend on knowing the sizes of a function's inputs and outputs. However, size information can be difficult to compute. Firstly, accurate size computation requires detecting size relations between the different inputs of a function. Secondly, size information may be contained within components of data structures.
In this talk, we highlight some techniques to derive universal and existential size properties over components of recursive data structures. We shall show how our analysis may be supported by size and set constraints.
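As a small example of the kind of size property such an analysis derives (a Python sketch; the function and the property-check are invented for illustration, and the talk's actual constraint language is not shown), list append satisfies the size relation |append(xs, ys)| = |xs| + |ys|, relating the sizes of both inputs to the size of the output:

```python
def append(xs, ys):
    # Structural recursion on xs, mirroring a functional-language definition.
    if not xs:
        return ys
    return [xs[0]] + append(xs[1:], ys)

# The derived size constraint: len(append(xs, ys)) == len(xs) + len(ys).
for xs, ys in [([], []), ([1], [2, 3]), ([1, 2], [3])]:
    assert len(append(xs, ys)) == len(xs) + len(ys)
```

A size analysis would infer this relation statically from the recursive definition rather than test it, as done naively here.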
Implementing Staged Computation
Staged computation is a key to partial evaluation and run-time code generation. In this talk, we present an approach to implementing staged computation through the use of guarded recursive (g.r.) datatype constructors. We first show how g.r. datatype constructors can be used to capture the notion of code (in the de Bruijn notation) and then present rules for typing staged programs. We also provide realistic examples in support of this simple and effective approach to staged computation, which can readily be made available in practice.
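A classical illustration of staging (sketched here with untyped Python closures rather than the typed g.r.-datatype setting of the talk) is the power function: the first stage consumes the exponent and generates specialized code with the recursion on the exponent already unrolled; the second stage runs it:

```python
def power_gen(n):
    # Stage one: given the exponent now, build "code" (a closure)
    # that computes x**n with no remaining recursion on n.
    if n == 0:
        return lambda x: 1
    rest = power_gen(n - 1)
    return lambda x: x * rest(x)

cube = power_gen(3)   # specialization happens once
assert cube(2) == 8
assert cube(5) == 125
```

The point of a typed approach such as the talk's is to guarantee, at type-checking time, that every piece of generated code is well-typed.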
Haskell Templates: User-directed compile-time computation
We propose a new extension to the pure functional programming language Haskell. The extension allows the programmer to direct a compile-time preprocessor phase that allows arbitrary manipulation of Haskell code fragments. These fragments include the entire Haskell language. The purpose of the system is to support the algorithmic construction of programs at compile-time. We envision using the system to implement such features as polytypic programs, macro-like expansion, and the derivation of supporting data structures and functions from descriptions of existing data structures. Program fragments are represented using Haskell's ordinary algebraic data types, but are also supported by a sophisticated quasi-quote presentation mechanism. This mechanism supports automatic alpha-renaming to avoid unintended capture, cross-stage persistence (variables representing run-time functions can be mentioned in the code fragments and will be correctly bound), and just-in-time type checking of constructed program fragments. The system also contains a reification mechanism, whereby any run-time function or data type definition can be reified -- i.e. a data structure of its representation can be obtained and inspected by the compile-time functions. We demonstrate the power of such a system with a simple prototype we have constructed called MetaHaskell.
The Girard-Reynolds Isomorphism
The second-order polymorphic lambda calculus, F2, was independently discovered by Girard and Reynolds. Girard additionally proved a \emph{Representation Theorem}: every function on natural numbers that can be proved total in second-order intuitionistic predicate logic, P2, can be represented in F2. Reynolds additionally proved an \emph{Abstraction Theorem}: for a suitable notion of logical relation, every term in F2 takes related arguments into related results. We observe that the essence of Girard's result is a projection from P2 into F2, that the essence of Reynolds's result is an embedding of F2 into P2, and that the Reynolds embedding followed by the Girard projection is the identity. The Girard projection discards all first-order quantifiers, so it seems unreasonable to expect that the Girard projection followed by the Reynolds embedding should also be the identity. However, we show that, in the presence of Reynolds's \emph{parametricity} property, this is indeed the case, for propositions corresponding to inductive definitions of naturals or other algebraic types.
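For orientation, the standard F2 encoding of the naturals (a well-known example, not taken from the abstract itself) shows the flavor of the Girard projection: discarding the first-order quantifiers and individuals from the P2 induction predicate for the naturals yields exactly the type of Church numerals in F2:

```latex
% P2: the induction definition of N(n), with first-order quantifiers over individuals:
N(n) \;=\; \forall X.\, \bigl(\forall y.\, X(y) \to X(s\,y)\bigr) \to X(0) \to X(n)
% F2: discarding the first-order parts (y, 0, s\,y, n) leaves the Church numeral type:
\mathbb{N} \;=\; \forall X.\, (X \to X) \to X \to X
```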
Optimization of Skeletal Parallel Programs
Abstract: Parallel skeletons are designed to encourage programmers to build parallel programs from ready-made components for which efficient implementations are known to exist, making both parallel programming and the parallelization process simpler. However, because parallel programming relies on a set of primitive parallel skeletons for specifying parallelism, it is often hard for programmers to choose the proper ones and to integrate them well when developing efficient parallel programs for their problems, and it remains a challenge how to systematically optimize skeletal parallel programs.
In this talk, I would like to present a calculational framework supporting efficient parallel programming using skeletons, showing (1) how to design a more general and efficient parallel skeleton that programmers can apply more easily to their own problems; (2) how to fuse compositions of skeletons in a cheap and systematic way; (3) how to flatten nested skeletons so that irregular parallelism can be exposed; and (4) how to implement an environment supporting skeletal parallel programming. Being constructive, our method is not only helpful in the design of efficient parallel programs in general but also promising for the construction of parallelizing compilers.
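As a minimal instance of item (2) (a Python sketch; the skeleton is simulated sequentially and the calculational rules of the talk are not reproduced), composing two map skeletons fuses into a single map, so the data is traversed, and in a distributed setting communicated, only once:

```python
def par_map(f, xs):
    # Stand-in for a parallel map skeleton; sequential here for illustration.
    return [f(x) for x in xs]

f = lambda x: x + 1
g = lambda x: x * 2

xs = [1, 2, 3]
two_passes = par_map(f, par_map(g, xs))      # map f . map g: two traversals
one_pass   = par_map(lambda x: f(g(x)), xs)  # map (f . g): one traversal
assert two_passes == one_pass == [3, 5, 7]
```

The calculational approach systematizes such rewrites for a richer set of skeletons (reductions, scans, and their compositions), where the fusion laws are less obvious than for map.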
Efficient Implementation of Parallel Tree Skeletons on Distributed Systems
Abstract: Trees are useful data types, but developing efficient parallel programs that manipulate trees is known to be difficult because of their irregular and imbalanced structure. ``Skeletal Parallelism'' was proposed to encourage programmers to build parallel programs from ready-made components (skeletons). However, on distributed systems, there were no efficient implementations of parallel tree skeletons. In this talk, we would like to present an implementation of tree skeletons that runs efficiently on distributed systems. We adopt the m-bridge technique to partition trees into connected components, and partially evaluate each component in parallel by using variables that represent the results on other processors. In our experiments, the implementation is practically useful even for imbalanced trees. We would like to discuss our ongoing work, too.
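To sketch the partial-evaluation idea in a toy form (a Python sketch; the actual m-bridge partitioning and the distributed execution are not shown, and the tree encoding is invented), a component that contains a hole standing for a subtree on another processor can be reduced to a residual value plus a count of holes, and the per-processor results composed afterwards:

```python
# A binary tree is ("leaf", v) or ("node", left, right); we sum all leaves.
HOLE = object()  # marks a subtree residing on another processor

def partial_sum(t):
    # Partially evaluate one component: return (constant, holes) such that
    # the component's total = constant + sum of the holes' remote values.
    if t is HOLE:
        return 0, 1
    if t[0] == "leaf":
        return t[1], 0
    cl, hl = partial_sum(t[1])
    cr, hr = partial_sum(t[2])
    return cl + cr, hl + hr

remote = ("node", ("leaf", 3), ("leaf", 4))             # lives "elsewhere"
local  = ("node", ("leaf", 1), ("node", ("leaf", 2), HOLE))

c, holes = partial_sum(local)   # c = 3, one hole
rc, _ = partial_sum(remote)     # rc = 7
assert c + holes * rc == 10     # composed result: sum of all leaves
```

For sums the residual is just a constant plus the hole; the general technique requires the residual to stay in a closed, bounded form (here, a linear function of the hole), which is what makes the parallel evaluation efficient.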
Self-centered but cooperative behavior in a complex competitive situation
In this presentation, we analyze a simple adaptive model of competition called the "Minority Game" that is used in analyzing competition phenomena in markets. The Minority Game consists of many simple autonomous agents, and self-organization occurs as a result of simple behavioral rules. Up to now, the dynamics of this game have been studied from various angles, but so far the focus has been on the macroscopic behavior of all the agents as a whole. We are interested in the mechanisms involved in collaborative behavior among multiple agents, so we focused our attention on the behavior of individual agents. In this paper, we suggest that the core elements responsible for the emergence of self-organization are: (i) rules of the game that potentially include a mechanism for a form of self-organization, (ii) rules that place a good constraint on each agent's behavior, and (iii) the existence of some rule that leads to indirect coordination, a process called "stigmergy." Keywords: The Minority Game, multi-agent, competition model, stigmergy.
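For concreteness, one round of the basic Minority Game (a minimal Python sketch of the standard model, not the authors' code; agents here choose at random rather than via the strategy tables of the full game) rewards whichever side is in the minority:

```python
import random

def play_round(n_agents, rng):
    # Each agent picks side 0 or 1; the agents on the minority side win.
    choices = [rng.randint(0, 1) for _ in range(n_agents)]
    ones = sum(choices)
    minority = 1 if ones < n_agents - ones else 0
    winners = sum(1 for c in choices if c == minority)
    return minority, winners

rng = random.Random(0)
minority, winners = play_round(11, rng)   # odd n_agents: no ties
assert winners <= 11 // 2                 # minority is, by definition, under half
```

Because fewer than half of the agents can win each round, agents cannot all converge on one "best" action, which is what makes the resulting self-organization interesting.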
A compositional framework for mining longest ranges
Data mining comprises a variety of techniques for discovering valuable information in large collections of data by identifying and extracting models and patterns. In this talk, we address a new kind of mining problem related to ranges, and present a compositional framework for mining such interesting ranges from huge databases. Within this framework, we propose a domain-specific query language to specify the ranges of interest, and give a general algorithm that efficiently mines the ranges specified in our language.
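As a flavor of the problem (a naive Python sketch with an invented predicate; the talk's query language and efficient algorithm are not shown), one such query asks for the longest contiguous range of records all satisfying a property:

```python
def longest_range(xs, p):
    # Naive scan for the longest contiguous segment whose elements
    # all satisfy p; returns (start, end) indices, end exclusive.
    best = (0, 0)
    for i in range(len(xs)):
        j = i
        while j < len(xs) and p(xs[j]):
            j += 1
        if j - i > best[1] - best[0]:
            best = (i, j)
    return best

sales = [3, 5, -1, 4, 6, 2, -2]
start, end = longest_range(sales, lambda x: x > 0)
assert (start, end) == (3, 6)   # the segment [4, 6, 2]
```

The compositional point is that predicates over ranges can be combined, and a mining algorithm derived systematically from the query, rather than hand-coding a scan like the one above for each new property.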
A Tree Automata Theory for Unification Modulo Equational Rewriting
Abstract: This talk is about my recent work on equational tree automata. I will present several (relatively :-) practical aspects of this research, mainly from my talk at UNIF 2002 at FLoC (Copenhagen). Those with theoretical interests may consult my papers from RTA 2002 and CSL 2001, linked on my homepage.
Decidable subclasses of TRSs which effectively preserve recognizability
Abstract: We first introduce the class of term rewriting systems (TRSs) that effectively preserve recognizability, called EPR-TRS. Next, we show some useful properties of EPR-TRSs. Since it is undecidable whether a given TRS is an EPR-TRS, several decidable subclasses of EPR-TRS have been proposed. Among them, we proposed (1) right-linear finitely path-overlapping TRSs and (2) layered transducing TRSs. In this talk, we sketch the ideas behind these classes. Finally, some applications of EPR-TRSs are given.