
DTC Seminar Series

Exploiting Incorrectly Speculated Memory Operations in a Concurrent Multithreaded Architecture (Plus a Few Thoughts on Simulation Methodology)

by

David J. Lilja
University of Minnesota
Electrical and Computer Engineering

Wednesday, October 15, 2003
1:00 pm

402 Walter Library

Download slides (pdf 1.23 MB)

Concurrent multithreaded computer architectures exploit both instruction-level and thread-level parallelism through a combination of branch prediction and thread-level control speculation. The resulting speculative issuing of load instructions can significantly impact the performance of the memory hierarchy as the system exploits higher degrees of parallelism. This work shows that the execution of loads from incorrectly predicted branch paths within a thread, or from incorrectly forked threads, can result in an indirect prefetching effect for later correctly predicted execution paths. By continuing to execute mispredicted load instructions even after they are known to be no longer needed by the correct execution path, the processor can reduce the number of cache misses on the correct path by approximately 40-70%. This reduction in cache misses produces an average overall speedup of about 10% on the benchmark programs tested.

Simulation has become extraordinarily popular in computer architecture research, providing the basis for nearly 70-80% of the papers published in the top computer architecture conferences. Despite this dependence on simulators, however, the computer architecture research community lacks a consensus on what constitutes a sound simulation methodology. The second part of this talk will focus on some of the shortcomings of current simulation methodologies and will propose a more rigorous technique for the setup and analysis phases of the simulation process. Several case studies using this technique will be presented, including how to identify key processor parameters, how to classify benchmark programs according to their overall impact on the processor, and how to analyze the effect of a processor enhancement.
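The indirect prefetching effect described in the first part of the talk can be illustrated with a toy cache model. The sketch below is purely illustrative and assumes a hypothetical direct-mapped cache and hypothetical address streams; it is not the simulator or workloads from the work itself. It simply shows that loads issued on a path that is later squashed still move data into the cache, so a correct path that touches the same lines shortly afterwards sees fewer misses.

# Illustrative sketch only: a toy direct-mapped cache showing how wrong-path
# (mispredicted) loads can warm the cache for the correct path. The cache
# geometry and address streams are hypothetical, not taken from the talk.

class DirectMappedCache:
    def __init__(self, num_lines=64, line_size=32):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines          # one tag per cache line
        self.misses = 0

    def access(self, addr):
        line = (addr // self.line_size) % self.num_lines
        tag = addr // (self.line_size * self.num_lines)
        if self.tags[line] != tag:              # miss: fill the line
            self.misses += 1
            self.tags[line] = tag
        # hits require no action in this toy model

def misses_on_correct_path(correct_path, wrong_path=()):
    cache = DirectMappedCache()
    for addr in wrong_path:                     # speculative, later-squashed loads
        cache.access(addr)                      # they still bring data into the cache
    cache.misses = 0                            # count only correct-path misses
    for addr in correct_path:
        cache.access(addr)
    return cache.misses

if __name__ == "__main__":
    # Hypothetical streams: the wrong path touches many of the same cache
    # lines the correct path will need shortly afterwards.
    correct = [i * 32 for i in range(0, 48)]
    wrong = [i * 32 for i in range(0, 32)]      # overlaps the first 32 lines
    print("cold misses:     ", misses_on_correct_path(correct))
    print("after wrong path:", misses_on_correct_path(correct, wrong))

Running the sketch shows the correct path taking 48 misses when the cache is cold but only 16 after the overlapping wrong-path loads have run, which is the qualitative effect the abstract describes.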

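For the second part of the talk, one way to see what "identifying key processor parameters" can mean in practice is a small design-of-experiments screening. The sketch below is a generic illustration, not necessarily the technique proposed in the talk: it sweeps a few hypothetical two-level parameters through a stand-in simulate() function and ranks them by the size of their main effect on the resulting metric.

# Generic design-of-experiments sketch (not necessarily the technique proposed
# in the talk): rank processor parameters by the magnitude of their main effect
# on a simulated metric. simulate() and the parameter levels are stand-ins for
# runs of a real cycle-accurate simulator.

from itertools import product

PARAMS = {                        # hypothetical two-level settings
    "issue_width": (2, 8),
    "l1_size_kb":  (16, 64),
    "rob_entries": (32, 128),
}

def simulate(config):
    # Placeholder for a real simulator invocation; returns a made-up
    # IPC-like score so the analysis below has something to work with.
    return (0.4 * config["issue_width"]
            + 0.01 * config["l1_size_kb"]
            + 0.002 * config["rob_entries"])

def main_effects():
    names = list(PARAMS)
    results = []
    for levels in product(*(PARAMS[n] for n in names)):   # full 2^k factorial
        config = dict(zip(names, levels))
        results.append((levels, simulate(config)))
    effects = {}
    for i, name in enumerate(names):
        hi = [r for lv, r in results if lv[i] == PARAMS[name][1]]
        lo = [r for lv, r in results if lv[i] == PARAMS[name][0]]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

if __name__ == "__main__":
    for name, eff in sorted(main_effects().items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:12s} main effect = {eff:+.3f}")

Parameters with large main effects are the ones worth studying carefully; parameters with negligible effects can often be fixed at a default value, shrinking the simulation space.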

David J. Lilja received the Ph.D. and M.S. degrees, both in Electrical Engineering, from the University of Illinois at Urbana-Champaign, and a B.S. in Computer Engineering from Iowa State University in Ames. He is currently a Professor of Electrical and Computer Engineering, and a Fellow of the Minnesota Supercomputing Institute, at the University of Minnesota. He has been a visiting senior engineer in the Hardware Performance Analysis group at IBM in Rochester, Minnesota, and a visiting professor at the University of Western Australia in Perth supported by a Fulbright award. Previously, he worked as a research assistant at the Center for Supercomputing Research and Development at the University of Illinois, and as a development engineer at Tandem Computers Incorporated (now a division of HP/Compaq) in Cupertino, California. He has served on the program committees of numerous conferences, was a Distinguished Visitor of the IEEE Computer Society, is a Senior Member of the IEEE and a member of the ACM, and is a registered Professional Engineer in Electrical Engineering in Minnesota and California. His primary research interests are in high-performance computer architecture, parallel computing, hardware-software interactions, nanocomputing, and performance analysis.