【University of Tokyo】 PIE Parallel Inference Engine

With the start of the Fifth Generation Computer Systems project (a Ministry of International Trade and Industry initiative) in 1982, researchers Tanaka Hidehiko and Moto-Oka Tohru at the University of Tokyo's Faculty of Engineering began researching parallel logic programming languages and the machines to process them. After developing several prototype inference processing systems, in 1986 the team began developing the Parallel Inference Engine (PIE), a machine specialized for processing Fleng, a simplified version of the Guarded Horn Clauses (GHC) language. The hardware was completed in 1993, and PIE started operation in 1994.

The PIE 64 consisted of 64 inference units connected by two distribution networks, each an eight-bit, 64-port gamma network. Each inference unit signaled its current workload so that work could be transferred to the inference unit with the lightest load. The inference units were equipped with UNIRED, which accelerated the execution of unification and reduction for logic programming language processing. The local memory of each inference unit could be accessed by other inference units over the distribution networks, and each inference unit included a network interface processor (NIP).
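As a rough illustration of the load-signaling scheme described above, the following Python sketch models dispatching goal reductions to whichever inference unit currently reports the lightest load. All names here (InferenceUnit, Dispatcher, dispatch) are hypothetical simplifications for illustration; PIE's actual mechanism was realized in hardware and its network interface processors, not in software like this.

```python
# Minimal sketch of least-loaded work distribution (illustrative only,
# not PIE's actual implementation).
from dataclasses import dataclass, field
from typing import List


@dataclass
class InferenceUnit:
    unit_id: int
    queue: List[str] = field(default_factory=list)  # pending goal reductions

    @property
    def load(self) -> int:
        # Stand-in for the workload value each unit signals over the network.
        return len(self.queue)

    def enqueue(self, goal: str) -> None:
        self.queue.append(goal)


class Dispatcher:
    """Routes each new goal to the unit currently signaling the lightest load."""

    def __init__(self, units: List[InferenceUnit]) -> None:
        self.units = units

    def dispatch(self, goal: str) -> InferenceUnit:
        target = min(self.units, key=lambda u: u.load)
        target.enqueue(goal)
        return target


if __name__ == "__main__":
    units = [InferenceUnit(i) for i in range(64)]   # PIE 64 had 64 inference units
    dispatcher = Dispatcher(units)
    for n in range(200):                            # spread 200 goal reductions
        dispatcher.dispatch(f"goal_{n}")
    # Loads stay balanced to within one goal (prints 1 here).
    print(max(u.load for u in units) - min(u.load for u in units))
```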

PIE also deserves special mention for its system software: an operating system with a function that adaptively optimized processing according to the system's operating conditions and the parallelism of the executing program, and HyperDEBU, a debugger that logically traced the location of bugs. One PIE unit is preserved at the National Museum of Nature and Science, Tokyo.

Compiled from pp.198-199, "The History of Japanese Computers", edited by the Special Committee for the History of Computing, IPSJ. Ohmsha, 2010.


University of Tokyo's PIE parallel inference engine